complex apathy won’t go away

AndyITGuy has a piece up on his blog: No incentive to end apathy. I agree with some of what he says, but I’ll also offer a hastily composed counterpoint for the sake of it!

In it, he says, “In many other parts of our life we are expected to know and do the right thing and if we mess up we pay.”

…unless you want to sue someone and make *them* pay! We have a very ‘point-fingers-first’ attitude when things go wrong, these days. I’m not saying that’s healthy, but that’s what I see, largely pimped by mainstream media.

Andy ultimately takes the position that we have a problem where users are not taking personal responsibility for their online activities because entities like the banks are removing the incentives to do so. (I’d say that’s like expecting children to learn proper behavior when you never actually punish them for anything…except I have no children so I better not go there!)

I always get back to my two core analogies: car maintenance and home security. I’ll try not to belabor those examples, but the core idea is that these topics are not difficult to think about, to attempt to solve, or to weigh the costs/risks of. Yet many people still put both their car (and their own lives and others’) and their personal safety at risk.

And we want to think they can understand and ever act accordingly online? I don’t think so. One thing I continue to need to come to terms with is my proximity to all things cyber since I was about 15, or more precisely, my interest in security since about 23. So many of my family, friends, and coworkers still just don’t understand it. And by “it” I mean both security and how the Internet works, the dangers therein, and how attackers do what they do. These are still advanced topics to those who barely grasp the fundamentals.

Even setting that aside for the moment, the problem gets ever more disheartening when someone does the commonly smart things to protect themselves, but then makes one mistake and they’re pwned. They hop on a wireless network at a con, accidentally click a link that takes them somewhere bad, or legitimately click a link to a place they wanted to visit that they didn’t know was a flytrap or had already been pwned itself. Or they write a sex thesis thinking it is a private joke shared with a couple friends, and it winds up posted to the world. What is one to do?

I’m certainly not saying it is hopeless to educate the masses, but I can’t envision them ever “getting it” enough to put a dent in the problem. Still, I’m not happy with the shielding people get from their personal failures, either. *shrug* This is a thorny topic…admittedly.

I wonder if the Middle Ages had any equivalent problem. “Stop buying that useless ointment from that trickster! It’s making you sicker!” “Stop paying for those pardons, God doesn’t need your money!”

I will say one thing: If someone mugs me on the street, do I get to blame someone else who will repay my money? Or if my house is broken into, what recompense am I entitled to? What if I didn’t opt for suitable insurance? Will I just get a lecture on how I shouldn’t have been walking on that street at night or should have had some guard dogs?

really enjoyed reading the verizon pci report

I previously mentioned Verizon’s new DBIR supplement related primarily to compliance (PCI) efforts they see, and I’ve finally gotten to read this very fine piece of work. So, my thoughts follow! If the report is too wordy for some reason, skip to page 10 and read the 12 requirement sections. (I had some PCI-related reactions to the DBIR already.) I really think this PCI Report (PCIR) is well-written and contains some very realistic and practical observations and information.

Requirement 1 (firewall configuration) – The PCIR sounds pretty accurate on why orgs don’t meet req 1: poor reviews of firewall/router rules, trouble proving those reviews take place, undocumented business cases, and poor egress filtering. Poor documentation could be *greatly* helped if firewall products allowed any sort of comment/description field with enough space to be useful. As it is, it seems rare that firewall rules can even begin to be documented inside the actual tools. Tracking and reviewing this sort of documentation becomes a huge task when talking about more than a few firewalls/networks.
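As a rough illustration of the documentation gap, here’s a minimal Python sketch against a hypothetical CSV rule export (the column names are made up) that flags rules with an empty or useless description field. Nothing fancy, but even this beats hunting through a GUI on review day.

```python
import csv
import sys

# Minimal sketch: flag firewall rules that lack a documented business case.
# Assumes a hypothetical CSV export with columns: rule_id, src, dst, port, description.
def undocumented_rules(path):
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            desc = (row.get("description") or "").strip()
            if len(desc) < 10:  # arbitrary threshold for "not really documented"
                flagged.append(row["rule_id"])
    return flagged

if __name__ == "__main__":
    for rule_id in undocumented_rules(sys.argv[1]):
        print(f"Rule {rule_id} has no usable business-case description")
```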

Requirement 2 (vendor defaults) – I’m not surprised by two of the three big reasons for challenges here: hardening systems by turning off unneeded services/functions, and documenting why some can’t be turned off. Both require a very mature system build and configuration process. It’s way too common for people to say, “I’ll harden this system,” but then just look for some magical checklist somewhere that says what to turn off. This is a beast to tackle for the first time.
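For what it’s worth, here’s the flavor of check that makes hardening repeatable rather than a one-time magic checklist: a small Python sketch (assuming the third-party psutil library and a made-up port allowlist) that compares what is actually listening against the documented baseline.

```python
import psutil

# Minimal sketch: compare listening TCP ports against a documented allowlist,
# so "harden the system" becomes a repeatable check instead of a one-off task.
# The allowlist below is a made-up example; substitute your documented baseline.
ALLOWED_PORTS = {22, 443}

def unexpected_listeners():
    unexpected = set()
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in ALLOWED_PORTS:
            unexpected.add(conn.laddr.port)
    return sorted(unexpected)

if __name__ == "__main__":
    for port in unexpected_listeners():
        print(f"Port {port} is listening but not in the documented baseline")
```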

Requirement 3 (stored data) – I’m also not surprised here that the big hang-ups are key management and the encryption of things outside the static databases. Key management is probably foreign to most teams, and not something anyone wants to handle, especially if encryption is done inside the tools and maybe not easily changed. I really think Verizon hit on one issue in the PCI DSS: there is data in transit and data at rest, but I believe there are two subsets of data at rest: data not in use and data actively in use. Obviously the latter is the problem child.
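To make the key-management point concrete, here’s a minimal Python sketch (using the cryptography library’s Fernet, with an environment variable standing in for a real secrets store or HSM) of encrypting card data before it lands in the database. The genuinely hard parts, rotation and access control, are exactly what it leaves out.

```python
import os
from cryptography.fernet import Fernet  # pip install cryptography

# Minimal sketch: encrypt card data before it hits the database, with the key
# supplied from outside the data store (an environment variable stands in here
# for a real secrets manager / HSM). Key rotation and access control, the
# actual hard parts of key management, are deliberately not shown.
def get_cipher():
    key = os.environ["CARD_DATA_KEY"]  # e.g. generated once via Fernet.generate_key()
    return Fernet(key.encode())

def protect(pan: str) -> bytes:
    return get_cipher().encrypt(pan.encode())

def reveal(token: bytes) -> str:
    return get_cipher().decrypt(token).decode()
```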

Requirement 4 (encrypted transmission) – I am glad the PCIR mentioned 4.2. I imagine that one is very often compensated for and/or overlooked (or just flat-out specifically ignored as being too small a trickle to matter!), despite actually being violated. It is hard to test for, and it is entirely incumbent on the host org to expose this. Even an internal IT team can’t really answer this alone without DLP-scanning-type technology, let alone a time-bound/access-bound auditor.
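Here’s roughly what that DLP-style check boils down to, as a hedged Python sketch: hunt through text (mail spools, logs, decoded captures) for digit strings that look like card numbers and pass the Luhn check. Real products do far more, but even this toy shows why an outside auditor can’t answer 4.2 on their own.

```python
import re

# Minimal sketch of a DLP-style scan: find strings that look like card numbers
# (13-16 digits, optional separators) and confirm them with the Luhn check.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def possible_pans(text: str):
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            yield digits
```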

Requirement 5 (AV software) – I’m still partly surprised this isn’t higher, but also surprised it isn’t lower. While everyone should have AV everywhere, I’d be curious how many shops are really staying on top of systems that get stood up without it installed, or systems with broken agents or outdated sigs for whatever reason…

Requirement 6 (dev and maintenance) – Patching. I wonder how much subjectivity still goes into this? Quarterly updates may be too long an interval for some auditors to sign off on. And what about applications, pieces of web applications, and network gear? It is not uncommon for a networking guy to only do updates when necessary, as opposed to taking an outage every 2 weeks for a new Cisco IOS version across 100 switches. Developing web apps securely is something I would still consider a foreign concept to most developers and development teams (inclusive of mgmt/leaders). It is even more foreign for said teams to update third-party pieces *inside* the apps. Hell, most don’t even document their presence! On the flip side, I think this is very hard for an auditor to be truly thorough in checking out.

Requirement 7 (logical access) – I agree with what the Verizon PCIR discussed on this topic. Beware anyone who bandies about the term RBAC. It’s easy on paper, but difficult in practice. Anyone who has managed accounts or Active Directory knows this requirement is nasty if someone gets serious about it. But it is oh-so-refreshingly easy when in place and enforced.

Requirement 8 (unique IDs) – I think the challenge in this item is likely all the web apps and other things that need unique IDs, like the networking devices the PCIR mentions. That, and the regular rotation of things like service account passwords. Such tasks are almost always outage-inducing, and business/IT tends to avoid outages.

Requirement 10 (tracking and monitoring) – I think this whole requirement is often foreign to IT teams. Sure, some do figure out central logging, but that’s really just for operations and troubleshooting. Everything else can be a huge new capital expense or ongoing budget item. And application logging is often done only insofar as it helps a developer troubleshoot something, and doesn’t adhere to any standard. FIM is also new to most, and again, a huge beast if someone takes it properly seriously.
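And for anyone wondering what FIM reduces to at its core, here’s a minimal Python sketch: hash a watch list of files, keep a baseline, and report differences later. The watch list and baseline filename are made up; the beast is everything around the hashing (deciding what to watch, legitimate change, alerting), not the hashing itself.

```python
import hashlib
import json
import os

# Minimal FIM sketch: hash watched files, store a baseline, report differences.
WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]  # example watch list
BASELINE = "baseline.json"                          # example baseline location

def snapshot(paths):
    result = {}
    for path in paths:
        with open(path, "rb") as f:
            result[path] = hashlib.sha256(f.read()).hexdigest()
    return result

def changed_files(baseline_file, paths):
    with open(baseline_file) as f:
        baseline = json.load(f)
    current = snapshot(paths)
    return [p for p in paths if baseline.get(p) != current.get(p)]

if __name__ == "__main__":
    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(snapshot(WATCHED), f)
    else:
        for path in changed_files(BASELINE, WATCHED):
            print(f"{path} has changed since the baseline")
```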

Requirement 11 (regular testing) – This requirement falls back into the category: new expense. Not easy for teams to swallow. Even FIM is not something you just find a checklist for, buy a tool, turn on, and forget. Vuln scans are not turn-on-and-forget either. Show me a clean vuln scan and I’ll show you how it is misconfigured. You should have pages upon pages of false positives to deal with, or real issues that need to be addressed.

Requirement 12 (security policies) – Incident response is a new item in many SMBs. Likewise, there may be a discrepancy between policy and actual practice partly because of who writes the policies. They are often written by auditors, management, and outside parties paid to craft them, but they need to be made by, or in close association with, the actual IT workers who do the work. I think req 12 is very often just outsourced and then never really followed. Even my own organization falls into this trap, with some policies that make me wonder whether anyone understands exactly how much work it is for me to ethically adhere to them while balancing business needs.

cyberraid 0 red team event recaps at hir

(Cool, my [work] web filter isn’t blocking HiR as criminal hacking anymore. Sweet! [Yeah, I know I can make exceptions in it since I control it, but I don’t. This is *one* reason why I’m so late in seeing these!])

Ax0n and Asmodian X have posted some excellent thoughts on their experiences during the CyberRAID 0 event in KC. I’ll follow with a couple thoughts of my own.
Ax0n (blue team): part 1
Ax0n (blue team): part 2
Ax0n (blue team): part 3
Asmodian X (red team)

Egress filtering. Firewalls were sexy 10 years ago. Ask any pentester today and they’ll say external scanning is usually pretty boring now. But for as far as organizations have come with ingress firewall filtering, far too many still suck horribly at egress filtering. I really like seeing further evidence of its value. Yes, it’s hard to get going in a production network without making mistakes and ‘discovering’ business requirements the painful way…but this is one of the higher-value efforts that many organizations still leave undone.
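If you want to see where your own egress filtering stands, here’s a hedged Python sketch of the basic test: from inside the network, try outbound connections on ports that should be blocked and report any that get out. The hostname and port list are purely illustrative assumptions; point it at a listener you actually control.

```python
import socket

# Minimal sketch of an inside-out egress test: attempt outbound TCP connections
# on ports that policy says should be blocked, and report any that succeed.
TEST_HOST = "egress-test.example.com"   # a listener you run outside the network
PORTS_THAT_SHOULD_BE_BLOCKED = [25, 6667, 4444, 31337]

def open_egress_ports(host, ports, timeout=3):
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # blocked, filtered, or timed out, which is what we want
    return reachable

if __name__ == "__main__":
    for port in open_egress_ports(TEST_HOST, PORTS_THAT_SHOULD_BE_BLOCKED):
        print(f"Outbound port {port} is open; egress filtering missed it")
```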

Pointy-Haired Boss. While many business requests end up *becoming* reasonable with some communication of the risks/costs, there are still plenty that just defy explanation and may nearly get put into place anyway, despite being bad, bad, bad. I’m glad Ax0n brought this up in part 2. Sometimes a little deception is needed to keep risks properly handled.

Defense is tough. This is an old horse, but still worth flogging. Defense involves not just fighting with attackers, but also keeping your own facilities up and properly working (scoring), backups and recovery from incidents, meeting business demands, inheriting things you didn’t create, and even learning brand new things (e.g. Asterisk) because, well, you have to. Not to mention all the soft-skills that come into play.

Attacking threats is tough. I support people in positions where they are able to actually attack threats, but most businesses are not in that position. The reality for most organizations is exactly what Asmodian X said: “Law enforcement is worthless unless you have done the leg work and provide them with useful information.” And yet, look out when the attackers start collaborating!

incomplete: on running faster when chased by bears

Common security analogy: “When you’re chased by a bear, you just have to run faster than the guy next to you!”

I continue to hear this analogy, and like pretty much any analogy it has holes if you look too closely. So the contrarian in me gets restless when I hear it (or insinuations of it) a few too many times. Lord knows I’m sympathetic to analogies and try not to get too far beyond the spirit of their point, but the over-used ones lose that privilege eventually!

1. Assumption: the bear is rational. I’ll run (pun unintended) with this further…

2. Assumption: the bear will survey all of his possible targets and choose the most accessible one. The bear may not know all of the possible targets, or may not even bother trying to make himself aware of them.

3. The bear may not properly evaluate the targets he does see.

4. Again defying rationality, the bear may just go after whomever he likes for strange reasons. Maybe the last target he ate was wearing a blue vest and tasted good.

5. Assumption: the bear will stop after he takes your buddy down. If a blanket, automated malware campaign is released, it will probably not stop at one success, but rather keep going to get as many as possible.

6. Assumption: there is only one bear. I’m pretty sure there are more attackers than just one mean ol’ bear.

7. Assumption: that you even realize there is a bear about. Let alone where he’s coming from, how fast to run, how the bear will respond, or whether the bear learned how to shoot a crossbow. (Yes, a crossbow.) The game may not be about outrunning the threat.

8. What about the bears of opportunity? Not every bear is a threat, but just because the last 10 bears ambled on by with barely a sniff doesn’t mean the next one won’t take a swipe as he lumbers near, so don’t get complacent. Can you tell a bear from a boar in the dark as it shuffles around? Or do you just run from everything that may be a threat…including your customers?

Blah, blah, blah. I had to get that off my chest a bit. Maybe this is a better picture: you’re in the woods with some buddies and about 500 other people. There are lots of animals, it is dark, the foliage is thick, and noise is everywhere. There are also 100 bears. Some of these bears are large and obvious, but others look a lot like your buddies or other people. Strange, I know! But the point is really that you can’t plan your security around simply being better than the others in your industry. In fact, strictly speaking, others in your industry shouldn’t even be an influence (in reality they are, but that is just good strategic management thinking).

verizon releases pci compliance report

Verizon has released an awaited follow-up to their annual DBIR. This release appears to focus on the correlation between data breaches and compliance to the PCI DSS. The report is near the bottom.


I can definitely say that the press release initially rubs me the wrong way for two reasons. First, I think it is obvious (at least to us) that activities that improve security (e.g. align with PCI suggestions) will, uhh, improve security. Second, anything that insinuates security via compliance sets a dangerous tone, namely that if you’re compliant you should be secure.


However! From my very superficial skimming of the pdf, this report looks much more interesting than just those two points up above that the press release seemed to salivate over. I’m also nitpicking that press release pretty hard. It might be one of those things where you see the title and opening paragraphs and suddenly start seeing red and it colors the rest of the text with that hue.

Picked this news up from Jack Daniel.

ever heard of the movie foolproof?

A few thoughts on a movie I watched this weekend that I’d never heard of before: Foolproof. Kudos to either the technical advisor or writer/director of this film for their research.*

1. Good lord, lockpicking done right?! Multiple times?! Indeed, not only do we see one of the protagonists pick several locks using *gasp* both a tension wrench and pick, but in at least one of those attempts it isn’t *double gasp* immediate! They even mention how it is taking him over 4 minutes to pick. I about fell over during that scene.

2. The protagonists are essentially doing red team activities, with an emphasis on physical attacks. To me, this sounds like a very healthy endeavor, even though they’re targeting companies they’re not affiliated with. One thing I liked, especially in their plans, is the lack of Hollywood dramatic license. Or rather, diminished use of it. I appreciate that…unlike the nonsense dialogue of Swordfish or the stylized file browsing of Hackers. This is more like Sneakers, only without the sci-fi-esque prize (encryption decoder box).

I had more things to say last night, but I’m in a hurry this morning, so suffice it to say I enjoyed the movie quite a bit, even though I see it was a terrible financial failure back in 2003. Give it a try!

* Minus one scene with the ever-present sharpening of a grainy image to read text on it. They came ever so close to pulling this off appropriately! They even had the dialogue correct and even sort of cut away from the usual telltale problem most films fall into, but you can still tell the print-out of the image is far clearer than it should be from a grainy security camera, even with some diddling to tease out some contrast to read text.

did that dead horse move? hit it again anyway!

Just another example, from eWEEK, about the trade-off between productivity and security. Too many people still act, often in an implied way, as though security can be met without any impact on productivity, or as though productivity at any risk is justified. Or they imply that security is only a technical problem that needs to stop getting in the way.

Federal executives said cyber-security measures impacted “information access, computing functionality and mobility” and reduced their productivity…

Aside: I still believe the use of just one comma in a three-item list is wrong, and it bugs the crap out of me. (a.k.a. the serial comma)

some thoughts on handling the it insider threat

NetworkWorld has a fun article up about sysadmins and the Insider Threat! (Here, if the print link doesn’t work, to save you 4 page-clicks.) This is a decent article if you give it a chance through all 4 pages and overlook the fact that it hyper-skims enough topics to fill a book.

“It doesn’t mean they’re guilty of anything,” Theis adds. “Sometimes they’re just trying to get the job done, but they’re outside the bounds of the organizational policy.”

Sometimes IT workers are pushed by demanding users, such as business and sales managers, to perform tasks in a hurry or to violate official IT policy by, for instance, adding printers on network segments where that’s not allowed.

Many suggestions in the article are correct, but they are really appropriate for larger enterprises and completely ignore the SMB. To its credit, the article does briefly cover some of what I consider the bedrock approaches to the topic of privileged IT insider threats.

1. Hiring practices. You’re hiring someone who may have access to your entire asset line and data. You had better have decent hiring practices in place for background checks, credit checks, proper evaluation, and so on. In the SMB, your admins are pretty much gods, even if you don’t want them to be.

2. Management directly. No amount of automation will remove the need for proper, close management of privileged users to determine if they are disgruntled, have pressures going on in their lives, and so on. The warning signs are almost always there.

3. Management protection. Many (all?) times IT staff are just trying to solve a problem. Management needs to be outwardly present to protect their staff from bending to those pressures. Don’t leave your employees to handle the brunt of pissed users who then turn in poor customer service reports, which pressures staff to be more lenient in order to get better reviews. That’s a downward spiral that will erode security.

4. High-Level policy. There must be policies in place on what the company and management expects for architecture, security stance, behavior, and so on.

5. Standards/procedures. This is a tough one, but there should always be procedures for admins to follow to accomplish common tasks, and guidelines (along with the aforementioned policies) for solving new problems. One person should not solve a recurring task in their own way, which may erode security. This happens way too often. Collaboration amongst peers helps as well. In the SMB, don’t undervalue consistent verbal standards/policies. (I know, some people will argue and say policies need to be written [*slammed fist*], but I believe the verbal side has realistic weight.)

6. Peer management. No one likes a snitch, but employees are very good at sensing changes and ethics in their peers. If someone is going through a hard time, suddenly acting suspicious, or giving off an untrustworthy vibe, raising it should be encouraged, either through a manager or through direct interaction. I wonder how many “disgruntled” employees could have been helped through better relationships in the workplace.

7. Awareness of options. This article presents a nice array of options on this topic, but most of them really require additional staff and tools to accomplish, beyond the reach of many SMBs. But it is still nice to know what options are out there and evaluate if something may be appropriate.

8. Audit access. This can either be simple or enterprise-worthy, so I won’t go deep into it. But have some approach for auditing access and who has the ability to use shared accounts, and so on. This can be some quarterly manual review, a brainstorming verbal session, or something vastly larger. The point is not to be surprised by who has access to what.
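As a concrete (and deliberately simple) example of that review, here’s a Python sketch that diffs two snapshots of privileged group membership, one per review cycle. The JSON snapshot format and filenames are assumptions; feed it from AD exports, /etc/group, application admin lists, whatever you have.

```python
import json

# Minimal sketch of a quarterly access review: snapshot who is in which
# privileged group, then diff against the previous review so nobody is
# surprised by who has access to what.
def load(path):
    with open(path) as f:
        return {group: set(members) for group, members in json.load(f).items()}

def diff(previous_path, current_path):
    prev, curr = load(previous_path), load(current_path)
    for group in sorted(set(prev) | set(curr)):
        added = curr.get(group, set()) - prev.get(group, set())
        removed = prev.get(group, set()) - curr.get(group, set())
        for user in sorted(added):
            print(f"{group}: {user} gained access since last review")
        for user in sorted(removed):
            print(f"{group}: {user} lost access since last review")

if __name__ == "__main__":
    diff("access_2010q1.json", "access_2010q2.json")  # hypothetical snapshots
```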

Got linked to this via the Infosecnews mailing list.

packetlife community lab

Looking to get some hands-on time with relatively modern networking gear, but don’t have the money, resources, or even knowledge to roll your own lab? Jeremy Stretch has made available a community networking lab. For free! Having some real hands-on time with the gear and command line is really a key element to advancing through Cisco certs. Please think about donating even just a little bit if you find the lab useful.

an example of consumerization and the enterprise

Just today I mentioned a point-counterpoint between Ranum and Schneier titled, “Should enterprises give in to consumerization at the expense of security?” I imagine most security folks feel this question every week, if not more often. I already had a taste of it on a Monday.

Clickability is a service that allows you to email links to people. Some sites such as The Wall Street Journal legitimately partner with Clickability to provide the limited ability to share articles with people who aren’t normally allowed beyond their pay-wall. Nothing too bad, yeah? But if you go to the links that Clickability advertises about itself, you find that anyone can add a javascript bookmark and email, essentially, anything they want to anyone they want…and pose as anyone they want. Rut roh… In my organization, we use IronPort web filters, and IronPort blocks Clickability features due to their categorization as “Web-based Email.”

This is one of those grey area cases. What advice do you give?

On one hand, I can basically email anyone anything and pose as anyone. This may mean the ability to exfiltrate information via port 80 (without the normal logging you’d get on outbound smtp). It might mean being able to harass an ex anonymously. Or harass someone at work! And while some may argue you need to dig a little to utilize such functionality, I would say not really. The links in Clickability advertise the ease of use, and even the barest minimum use-case demonstrates the spoofability. And while most people won’t go out of their way in their daily lives to figure out how to spoof emails, if you put it in front of their noses, they’ll turn into criminals-of-opportunity, even if it just starts out as a practical joke on a cubemate.

In addition, an expert appliance, the IronPort web filter, is saying this site breaks policy. Should an SMB take it upon itself to make exceptions and start down that slippery road? One could argue that a major portion of the value in appliance-based web filters is not having to sift through and block sites on your own, but rather inheriting what the experts say.

On the other hand, this is a borderline case for “Web-based email” in that it does not allow two-way communication. You can fire off emails, but you can’t get any in return. Likewise, you can’t send attachments.

In addition, the person making this request is a salesperson. With a laptop. And readily available access to networks not subjected to our web-filtered VPN connection. So why even bother to control this? Similarly, we’re looking at expanding our mobile presence, which will further erode our ability to truly keep our arms around the data (assuming we’re even still legitimately *in* that battle!).

These are big questions, and the answers depend completely on corporate culture. Unfortunately, those with open cultures will always slowly pressure and erode those with tighter cultures. The whole “grass is greener” or “But Bob at the Club told me they decided to allow it, so we can, too!” mentality.

Often the best we (SMBs) can do is educate management as much as possible, but then roll with whatever decision is made. In the absence of regulation, I’m pretty sure there is no right or wrong answer. We could clamp down and say no, or we could stay aligned with consumerland technology.

(My advice is pretty much the above; but I would lean just slightly on the side of trusting the appliance categorizations, and as such keep the site banned. But if someone else overrules me, I won’t be kept up late at night. There are good reasons to roll with the winds of technology, many of which go beyond security.)

a ranum history of security

I wanted to repost this funny blurb from Marcus Ranum in the latest Information Security issue. As usual, the high point of the mag is the Ranum/Schneier point-counterpoint piece.

1995) install firewalls
1996) punch big holes through them
1997) announce “firewalls are dead”
1998) install intrusion detection systems
1999) turn off all the signatures
2000) announce “intrusion detection is the pet rock of computer security”
2001) install log aggregation systems
2002) ignore them
2003) complain that intrusion detection still doesn’t work
2004) worry about data leaking from the network
2005–2010) give employees mobile devices
2006–2010) give employees direct-from-desktop Internet publication capability via Facebook, Twitter, etc.
2010) give employees control of their own IT—when is it all going to sink in?

Their topic was the widening role of consumerland devices and technologies being pushed into the enterprise while security managers freak out. The realistic point is that this is how change is made, and if your company doesn’t stay on top of new tech, someone else will. Sure, your risk will go up, but it’s a corporate decision, and often the best we can do is educate management on the risks/costs, educate users, detect issues quickly, and respond efficiently when they do happen, rather than lean on the brake as in Ranum’s excellent parting analogy. Still, even being aware of all this new tech is difficult, let alone trying to tackle the security of it…

Linked by Anton for an unrelated thing.

wireless bssid used in geolocation

Post and code up on Attack Vector: Geolocation Using BSSID. Matt finally brings this home at the end with the key question: how do you get someone’s BSSID? And that’s really the key, right? Well, if JavaScript can leak that information over the Internet, you have an interesting way to track people down.

I hate how movies geolocate someone using their IP address (if we’re lucky, they even get *that* technical) within seconds. Now this might be a bit more realistic (with some room for error due to proximity or overlapping BSSID names) for people on wireless and leaky equipment. Very interesting!
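For the curious, here’s a rough sketch of the lookup half of that trick: given one or more leaked BSSIDs, ask a wifi geolocation service where those access points live. I’m assuming Google’s Geolocation API and its request format from memory, so treat the URL, the JSON shape, and the sample BSSIDs as assumptions and check the current docs before relying on it.

```python
import json
import urllib.request

# Rough sketch: ask a wifi-geolocation service where a set of BSSIDs lives.
# The endpoint and request body assume Google's Geolocation API as I recall it;
# verify against current documentation.
API_URL = "https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY"

def locate(bssids):
    body = json.dumps({
        "considerIp": False,
        "wifiAccessPoints": [{"macAddress": b} for b in bssids],
    }).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # expected: lat/lng plus an accuracy radius

if __name__ == "__main__":
    # Hypothetical BSSIDs leaked from the victim's side (e.g. via script)
    print(locate(["00:11:22:33:44:55", "66:77:88:99:aa:bb"]))
```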

opening the door towards dialogue

Having just recently posted about the latest ASP.NET vuln, I just wanted to say I absolutely love how even non-security people suddenly poke their heads up and ask questions about issues like this when they are disclosed. Or better yet, post workarounds, issues, ways to detect these attacks, and so on. You can’t open up dialogue like this with closed-door issues…

That’s not to say I’m pro full-disclosure absolutely, but in the absence of Internet-breaking, easily-recreated issues that can be solved quickly (i.e. *really* good reasons), I tend to sympathize greatly with sharing the info rather than secreting it away.