my user education rules of thumb

My RSS reader is getting swamped because I’m behind. In trying to catch up, I see more QQing about user education (either lack of support for it or lack of value in it). Here are some of my personal guidelines about user education in regard to enterprise security. These are not hard and fast rules, just general guidelines for me.

1) User education helps inform users of and explain corporate policies and technical controls. A workforce that doesn’t know policy can’t follow it. A workforce that doesn’t understand why a control is in place will fight against or around that control.

2) User education helps those who truly want to do the right, secure, safe thing. Some people are quite open and actually thirst for this knowledge, both for work and at home. This is not all people, especially when push comes to shove and the “right” thing means not doing the “easy” thing in your job. E.g., it is easier to just email that client the necessary SSN-filled spreadsheet than to figure out or set up a secure transfer method via “encrypted” mail, encrypted mail, or SFTP. (Yes, I meant to list three things there…)

3) User education fills in the gaps that technical controls cannot adequately fill. There are security problems that simply cannot be solved very well with technical or procedural controls. A salesman talking in the airport on his cell phone about confidential business plans can be overheard, and there’s not much you can do about that. Or it may not be technically possible to add more physical security to your building if you don’t own it. But user education can demonstrate that the business is not negligent about such issues, and the user may change his behavior after such education (see #2).

4) Technical controls are more valuable than user education. To mitigate a particular risk, if the value of the technical control roughly equals that of the user education control, and they cannot add to each other, then the technical control should win out. While user education has value, it does not ensure anything. Even I, as an informed and careful sec geek, would rather not have to make judgement calls or risk mistakes dealing with a strange attachment. I’d rather it be stripped early, not delivered to me, or my system not be vulnerable (patched, least rights, HIPS…).

5) User education is worthless without technical controls. This follows from some earlier points, but imagine a company that has little to no technical controls and relies on its workforce’s intelligence to be secure. At least with technical controls, there is some assurance of a certain level of unattended security, assuming good configurations and settings. With technical controls, you can trust and verify. With user education, you have to trust, measure, and generalize.

6) User education is especially valuable, nonetheless, to the people who decide on technical controls. IT and security staff need continued training. IT and security staff need continued training. IT and security staff need… We can’t make things right unless we know how to make things right. From developers to IT professionals to managers, the technical people need technical training. Part of “baking in” security is about kneading in the knowledge.

Parting thoughts: This is not to say I think user education is worthless. I think a proper security approach blends user education (along the guidelines above) with strong technical controls. I simply think the drink is more like 1 part user education to 9 parts technical control.

scripting games entries are posted to wiki

I am again posting my entries to the scripting games over on my wiki. Due to time constraints, I’ve not been able to devote myself at all to the Perl side of the events, but I have completed all the PowerShell ones (I’ve not turned them in yet and will do so as the deadlines approach). I also decided to try the Sudden Death stuff. Not sure why my first one scored 0, but I emailed them about it. I think my sending it in 1.5 hours before the deadline may have counted against me.

Overall, very fun exercises this year, and I get to learn more about PowerShell. I think the events are just a bit more complicated than last year, which is truly welcome!

Event 1 involved creating a word out of the letter conversion of a 7-digit phone number. Rather than stopping at the first answer, my script finds all the possible words and then just returns the first one.
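
For the curious, here is a bare sketch of the approach (this is not my actual wiki entry, and it assumes a word list at a made-up path, one word per line):

    # Map each phone digit to its keypad letters.
    $digitMap = @{
        '2' = 'A','B','C'; '3' = 'D','E','F'; '4' = 'G','H','I'
        '5' = 'J','K','L'; '6' = 'M','N','O'; '7' = 'P','Q','R','S'
        '8' = 'T','U','V'; '9' = 'W','X','Y','Z'
    }

    $number = '7669476'                              # sample 7-digit input
    $words  = Get-Content 'C:\scripts\wordlist.txt'  # hypothetical dictionary file

    # Build every possible letter combination for the number.
    $combos = @('')
    foreach ($digit in $number.ToCharArray()) {
        $next = @()
        foreach ($prefix in $combos) {
            foreach ($letter in $digitMap["$digit"]) {
                $next += $prefix + $letter
            }
        }
        $combos = $next
    }

    # Keep every combination that is a real word, then echo only the first hit.
    $hits = $combos | Where-Object { $words -contains $_ }
    $hits | Select-Object -First 1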

Event 2 wanted to average scores from a text file and echo the top 3. This was pretty routine, and maybe one of the easier Advanced events.
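
Again, not the entry itself, just a minimal sketch of the idea, assuming a made-up input file of one “Name,Score” pair per line:

    # Tally each person's total score and score count.
    $totals = @{}
    $counts = @{}

    foreach ($line in (Get-Content 'C:\scripts\scores.txt')) {   # hypothetical path
        $parts = $line.Split(',')
        $name  = $parts[0].Trim()
        $totals[$name] += [double]$parts[1]
        $counts[$name] += 1
    }

    # Turn the tallies into Name/Average objects, sort, and echo the top 3.
    $totals.Keys | ForEach-Object {
        $name = $_
        New-Object PSObject |
            Add-Member NoteProperty Name    $name                             -PassThru |
            Add-Member NoteProperty Average ($totals[$name] / $counts[$name]) -PassThru
    } | Sort-Object Average -Descending | Select-Object -First 3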

Search: query for ‘sadly’

For some reason, just about every hour, the IP address 66.249.67.139 (crawl-66-249-67-139.googlebot.com) hits my site and does a one-word search. The terms are rarely meaningful, although now and then it searches for something relevant to my site. Weird.

ms08-006 analysis by hd moore

Yesterday I mentioned the severity of MS08-006. Last night, HD Moore posted an analysis of this patch.

This is a server issue, and it is only enabled by the use of certain coding practices that are not bad in and of themselves. Considering most admins have no idea what code is running on their systems, whether from internal developers or third-party web products, this patch should still be treated as critical for servers. I assume a purposely vulnerable and dangerous ASP file will be released in the next few weeks that I can copy, put on a server, and use to auto-pwn it in some way (shovel a shell over ASP?).

to be critical or not to be critical

Microsoft is in a not-so-enviable position when it comes to patch releases. Yesterday Microsoft released MS08-006 as one of a slew of patches. They rated this issue “Important.” But if you look closely, it scores the highest severity in every category except one: the number of systems affected. If you do have affected servers, this is about as critical as an issue can get, short of it already worming around.

This sucks because techs like me want the real skinny, but we all know the media will latch onto “Microsoft released a critical patch…” and drop the “…that only affects…” part. And then people like my managers or the stakeholders on the systems in question will say, “But Microsoft themselves only rated this Important, surely you can slow down…”

There’s really no answer here, and I think Microsoft errs on the correct side, since I can figure out for myself that the issue is critical (assuming Microsoft continues to be detailed in their descriptions), but the general public is less likely to figure out that an issue doesn’t matter to them. Still, it is a lame situation. Maybe Microsoft should apply an overall severity to an issue only after identifying the affected products?

Or do what SANS does and split the ratings up between clients and servers. Those categories are general enough and make damn good sense.

dkim biggest improvement in smtp in decades?

One of the biggest failures of SMTP (email) is the ability to spoof the sender, i.e., a lack of non-repudiation. I’m a firm believer in the ongoing death of email.

But I see there is still room for improvement. DKIM, DomainKeys Identified Mail (just covered by NetworkWorld), appears to use keys stored in DNS and signatures on outgoing mail, so that receivers can verify the sending domain against the published public keys. This will only be as strong as the weakest of three things: private keys staying private, IT techs not fudging their mail server configs, and fake signatures not being able to be embedded into mail and still pass the checks (email clients [or MTAs] will have to flag fakes, since we humans certainly can’t tell).
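
As a concrete illustration (the selector, domain, and truncated key below are all made up), the public key ends up as a TXT record under a selector in the sending domain’s DNS, and a receiving server looks it up to check the signature header on the message:

    ; hypothetical DKIM key record: selector "mail" for example.com
    mail._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...IDAQAB"

    ; a receiver (or a curious admin) can pull it back with:
    ;   nslookup -type=TXT mail._domainkey.example.com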

I had no idea about DKIM until today, but it definitely sounds like a move in the right direction. Will it save SMTP? That I don’t know, but it should certainly reinvigorate it in the business world. I do plan to work DKIM into what I do, but it will only be after/during my email server overhaul/migration from a Windows app to Postfix (most likely).

tippett on security approaches today

I had posted about the article from Tim Wilson (DarkReading) giving a blitz of opinion from Peter Tippett, but deleted the post. I got the link from Rothman, and now I see (as I catch up with the news) that Hoff posted as well. Shit, I guess I will repost, especially as I can fully empathize with Hoff’s feeling of “flip-flopping between violent agreement and incredulous eye-rolling from one paragraph to the next.” I also deleted my post because I really had no idea who Peter Tippett is.

Tippett compared vulnerability research with automobile safety research. “If I sat up in a window of a building, I might find that I could shoot an arrow through the sunroof of a Ford and kill the driver,” he said. “It isn’t very likely, but it’s possible.

“If I disclose that vulnerability, shouldn’t the automaker put in some sort of arrow deflection device to patch the problem? And then other researchers may find similar vulnerabilities in other makes and models,” Tippett continued. “And because it’s potentially fatal to the driver, I rate it as ‘critical.’ There’s a lot of attention and effort there, but it isn’t really helping auto safety very much.”

I sometimes use such analogies myself, but I think it is important not to lean too heavily on them. The analogy above ignores the ease and efficiency of digital attacks. It would be more accurate if I could shoot many arrows randomly, build arrow-firing machines in any place I want, and recruit others who can easily build and deploy such devices. If this occurred with the efficiency, impersonality, and ease of a digital attack, you bet it might be a concern for Ford. Likewise, such arrow attacks may impact just the drivers and a few nearby cars; a data disclosure or cyber attack could affect hundreds or thousands, for years.

I also took exception to the following, which might just be a problem with condensing Tippett down to a few hundred words, or might mean Tippett needs to do a little soul-searching on how he wants to approach security.

But if a hacker breaks into the password files of a corporation with 10,000 machines, he only needs to guess one password to penetrate the network, Tippett notes. “In that case, the long passwords might mean that he can only crack 2,000 of the passwords instead of 5,000,” he said. “But what did you really gain by implementing them? He only needed one.”

versus

Tippett also suggested that many security pros waste time trying to buy or invent defenses that are 100 percent secure. “If a product can be cracked, it’s sometimes thrown out and considered useless,” he observed. “But automobile seatbelts only prevent fatalities about 50 percent of the time. Are they worthless? Security products don’t have to be perfect to be helpful in your defense.”

What the hell is he trying to conclude here? I could be reading more into this than he intends, but hopefully he just wants to say we need to think more about the value of these measures. It just struck me as odd that he takes two rather opposing positions there. Neither approach secures 100 percent, but in one case he questions the value and in the other he condones it.

asus eee pc rootable by default

The Asus Eee PC (the official page is way too flowery to link to) is becoming a bit popular amongst colleagues for its low price and small footprint. It comes loaded with Xandros by default. Via the Full-Disclosure mailing list, it appears the device ships with a rootable version of the Samba daemon. Doh! Props to RISE Security for finding and posting about this.

If you’re like me and have not jumped on the Asus Eee bandwagon, it might be worth waiting for the second generation in April (per the Wikipedia article).

If you run a network that you want to be hostile to outsiders and you don’t use Asus Eees, you should be able to add passive/active rogue-system detection that automatically triggers this rooting should such a system be plugged in. Detect, root, wipe, see who screams later.

my current thoughts on the state of antivirus

I still maintain that antivirus software is a necessity for computers these days. But after reading some thoughts from Michael about AV, I’m wondering if my long-standing Top 5 Security Step is less and less founded in rationality. As a quick summary, I’ll say that AV is dying in the enterprise, but as a consumer protection it is still an easy and easily understood suggestion. In the enterprise, AV is simply evolving, either migrating into other layers or into things like HIPS. As a bottom line, be open and think about the role of AV in your situation. I expect (and welcome!) strong reaction from Wismer on any holes in this post! 🙂

(I run AV on my home Windows boxes. I also use it on my mail gateway. My Linux boxes do not run AV. At work, we use AV and soon HIPS on all systems, and we’re a fully Windows shop.)

So what is AV supposed to be doing? Well, it is supposed to block, detect, and clean various bits of malware from my system. It does this in realtime and with regular scans.

  • Signature-based– Everyone digs on AV signatures being a limiting factor. This is true and is illustrated by the TSA no-fly lists. Jason Bourne’s name appears on this list. When Jason Bourne attempts to board an airplane, someone compares his name to that on some ubiquitous list of baddies. What if Jason changes his name to James Bourne? He’ll get through. What if there is another, completely innocent person named Jason Bourne? He might get denied access. Signatures work no better, really. And what if his name gets printed as Bourne, Jason? This is a bit like a file being scrambled or encrypted: it still works, but might no longer exactly match the signature list. (A quick hash sketch of this brittleness follows this list.)
  • Protects against email-borne malware– AV protects against bad things sent via email. The problem here is threefold. First, many users are slowly getting used to not clicking random files in emails that they didn’t request (slow but sure!). Second, mail servers and gateways are getting better at stripping bad attachments and files. Third, any brand new threats that attack otherwise trusted file types like PDF or DOC are no better stopped by AV at the host than by AV at the gateway. I’ve found our third-party spam filter provider is far better at detecting, scrubbing, and reacting to spam and new attacks than we ever could hope to be (part of the outsourcing trend of security commodity services).
  • Protects against network-borne malware– AV protects against bad things banging against and entering the system from the network, via network shares on the host or the host connecting to network shares. This can also include old exploits that pop vulnerable services/stacks in Windows or Windows-borne apps. We’ve not seen a huge number of these like we did 4+ years ago. The network is getting more protected as the OS incarnations become more solid (arguably) and network security matures. Firewalls, IDS/IPS, gateway AV, and even simple router ACLs/NAT keep a lot of things safer than they used to be. We’re also getting better at detecting when something bad is circulating on the network. I believe all of this progress is due not to technology itself, but to the slowly accumulating technical experience and expertise in the enterprise and in commercial tools. All of this makes AV’s role in protecting against network-borne malware a bit more redundant.
  • Protects against web-borne malware– This is my more dubious claim, but I don’t have the feeling that AV protects me all that much from the various web-borne attacks. Sure, it can detect and maybe stop the big ones, but there are innumerable ways to write such malware. I’m just as worried about the targeted attack from a niche hacking site I visit as about the Super Bowl page with some generic dropped script. Things like web filters and HIPS and limited rights help the enterprise user. Things like non-standard browsers and NoScript types of add-ons help home users. I think the impact of AV on this vector is diminished.
  • Keeps the system running smoothly– Malware still bears the telltale trait of slowing our systems to a crawl, in many cases. We don’t like this. It soaks up productivity, increases user frustration with technology, and can harm the system itself, up to overheating or simply an unrecoverable OS. Other security factors have been pushing data to be more secured and available, especially in backups or on the trusted networks. This means the physical endpoint is becoming more expendable, the least costly of our worries. Likewise, a pwned system with lots of malware can simply be rebuilt in such an environment, with little real loss. Home users are typically not as lucky in this regard.
  • Protection against known attacks– My problem with this sort of assertion is twofold. First, protection is only against known attacks, not bleeding-edge unknown ones. AV is not the only victim here, since the attacks *are* unknown! Likewise, the inverse is true: AV protects against known attacks no better than protections in other layers, like the mail gateway or web filter. Second, keeping systems and applications patched (always easier said than done!) should also protect against known attacks. I would never happily justify slack patching due to AV protection.
  • Provides security in untrusted networks– I’ll argue that this is still true, but also reduced and probably eclipsed by a good bi-directional firewall and HIPS. It’s a fact of life that computers can now move at will from the trusted network to untrusted ones. Even if your laptop usage is small, it helps to just treat everything like it is mobile. While AV’s role is diminished by edge and perimeter security measures, those are gone in an untrusted network.
  • Keeps the computer safer from human stupidity– There’s a reason this bullet is last: it’s especially important. Users can still make mistakes, and it really does help to catch those mistakes. Even if they happen and detectors raise alarms, I’d rather know something is borked than not know it. I really see AV’s main purpose these days as protecting against human error. Yes, other tools and approaches like limited rights and HIPS can do the same thing, but at least AV is easily accessible to home consumers, and more understood. If a piece of malware from 3 years ago gets sent to my users, I can expect one, someday, to accidentally click on it (come on, we’ve all accidentally run something we didn’t mean to at some point!), and that’s the safety net AV maintains. I’d rather my parents run AV than a FW or HIPS and not know whether to allow an action or not.
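
To illustrate the brittleness from the signature bullet above (real AV signatures are byte patterns and heuristics rather than flat file hashes, but the exact-match problem is similar), here is a quick sketch with a made-up file path and a made-up known-bad list; flip one byte in the file and the hash no longer matches anything:

    # Compute an MD5 "signature" for a file and check it against a known-bad list.
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $bytes = [System.IO.File]::ReadAllBytes('C:\temp\sample.exe')        # made-up path
    $hash  = [System.BitConverter]::ToString($md5.ComputeHash($bytes)) -replace '-',''

    $knownBad = @('0123456789ABCDEF0123456789ABCDEF')                   # made-up signature list
    $knownBad -contains $hash                                            # exact match or nothing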

While I feel, personally, that the role and importance of AV in the enterprise is dying or greatly diminished, I would not recommend any shop abandon AV without doing a couple of things.

  • Replace the AV with something– Chances are this will be a HIPS product, but replace it with something. I don’t think I’m fully ready to strip the host of third-party protection or leave it with just firewalls in place.
  • Examine your laws and regulations– Does some regulation specifically require AV to be present (PCI)? Then you have to keep it, really. You might also have to make an extra good case to your lawyers or management teams; the necessity of AV is pretty deeply ingrained by now.
  • Examine your defense in depth– A lot of the usefulness of AV is being eroded by layers of defenses and replacement products. Sure you can replace AV with HIPS, but don’t argue against AV if you don’t have network perimeter and edge device protections to stop malware from entering the safety of your trusted networks. Make sure you still have confidence in your other mitigating security measures.
  • Prove the value of the alternatives or the lack of value in AV– Set up some tests with your techs to evaluate the real benefits of AV. Granted, I doubt your results will be publish-worthy, but try to understand what gets by the AV and what gets by HIPS, if that is your alternative. Scrape your spam filter for bad files, put them onto a box with both products, and attempt to run them. Try to run them on an unsecured box, and see if you can push or install the products after the infection. And so on. Understand what you’re replacing, so that you can be more confident in the added or decreased value of your decision. Or have your vendors/partners do this for you. Maybe HIPS will provide additional benefits like an inbound firewall or other alerting mechanisms that go beyond just AV actions. These tests may go a long way toward garnering you support in the enterprise. (A bare-bones sanity-check sketch follows this list.)
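
As a starting point for that kind of bare-bones check, here is a hedged sketch (the scratch path is made up, and verify the EICAR string against eicar.org rather than trusting my memory of it): drop the harmless EICAR test file on the box and see whether the resident AV, or its HIPS replacement, reacts.

    $eicar  = 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'
    $target = 'C:\temp\eicar.com'                  # made-up scratch path

    # Write the test file, then give on-access scanning a chance to fire.
    Set-Content -Path $target -Value $eicar -Encoding ASCII
    Start-Sleep -Seconds 10

    if (Test-Path $target) {
        Write-Host "Still on disk - on-access scanning did not catch it."
    } else {
        Write-Host "Gone - the product removed or quarantined it."
    }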

the shared responsibility for availability

Research has claimed that businesses are now more concerned with “availability” than they are with security. I’m not surprised by this, since the availability of technology is a shared role between IT in general and the security team (as part of the CIA triad). I’d like to point over to some ongoing discussion at Farnum’s Place. Feel free to chime in!

So, when did security eclipse availability? I think availability, by its nature, always has to be first. Or maybe it is an integral part of the security posture (again, the CIA triad) and not broken out separately. Regardless of which of those two is correct, this breakdown makes me wonder about the validity of this research, or at least of the article presenting it.

security religions

DailyDave has scared up an interesting mini-conversation about Security Religion. I call this Security Religion because the argument centers on some very fundamental beliefs that security people have when combating the evils of the cyber world. It is extremely important in passionate discussions to realize which religion the speakers are siding with, to avoid circular arguments that get nowhere. Some discussions have no correct answer, and there can be no chance of agreement due to differences in fundamental assumptions (kinda like someone claiming their religion as the ultimate one because it says so in the Bible, when their audience hasn’t bought the assumption that the Bible is divine…). The argument is in the assumptions, not the resulting assertions.

I have purposely struck out most of the content below, since it is just me being wordy and unnecessary.

Absolute security vs. incremental security.

Absolute Security followers accept and pursue security solutions that are inherently secure or absolutely secure. Something that is inherently secure may not be absolutely secure right now, but is as secure as it theoretically can be at this moment.

Absolutists often define security as something much closer to a state, where things are highly secure. When they say something adds security, they mean that it is in a state that is not breakable. They may say that security is not a state to achieve, but only insofar as zero days can be found and patched against, i.e., new attack vectors and threats that aren’t known today. They don’t spend excessive amounts of time, money, energy, or political clout on solutions that have weaknesses or holes in them. With this approach, they tailor their security approaches towards even highly skilled threats, internal and external.

Perfect security seems like an impossibility, meaning these people will have very few solutions and very few good feelings about their security. They shouldn’t use Windows, as this violates the fundamental belief (since Windows can be inherently insecure). Absolutists may be unable to provide satisfactory solutions without an overflowing budget, support, and staff. Absolutists do not manage risk; they would rather try to remove all risk. They put heavy emphasis on technological controls, since people are fallible and make mistakes. Absolutists will overlook small security measures that stop unskilled attackers or automata but would fail against a skilled attacker.

Example A) Absolutists will argue against the benefit of changing the listen port of an SSH server, and will instead prefer to harden the SSH server itself (a concrete sshd_config sketch follows Example C` below).

Example B) Absolutists will likely argue against the value of IDS or other detection solutions: attacks should not succeed in an absolute-security network, therefore detection is wasted time. Caveat: detection may be suggested as a tripwire for zero day attacks or other unknown things.

Example C) Absolutists scoff at the notion of MAC address controls and SSID hiding in WAPs.

Incremental security means acknowledging that security measures are not perfect, especially in an imperfect world with imperfect humans at the base of any security regimen. Incrementalists therefore believe that layers are the best approach. Sometimes this means “any security is an improvement.”

Incrementals acknowledge that there are no perfect security measures, and can plan around those deficiencies. Incrementals tend to define “security” as a measure on a scale between ultimately secure and ultimately insecure. They have a more realistic outlook, which means being able to work with tighter budgets, lack of staff, and less efficient tools. Incremental belief lends itself to a risk management approach. They almost always accept that security is an ever-changing process and not a state.

An Incrementalist may waste time applying various imperfect layers of security to compensate for the imperfections. They may get mired in always fighting an uphill battle, causing burn-out, frustration, and never-ending politicking to get projects approved and accomplished.

Example A`) Incrementals believe there is some benefit to changing the listen port of an SSH server.

Example B`) Incrementals will give consideration to IDS and detection measures as a way to alert on possible or successful attacks.

Example C`) Incrementals will argue that there is some value in protecting wireless networks by disabling SSID broadcasting and using MAC controls.
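
To put Examples A and A` in concrete terms, here is a hedged sshd_config sketch (the directives are standard OpenSSH ones, but the port number and account name are made up). The Incremental tweak is the port change; the Absolutist answer is to harden the daemon itself:

    # Incremental tweak: move SSH off the default port (only raises the bar for casual scanners)
    Port 2222

    # Absolutist hardening: shrink the actual attack surface
    Protocol 2
    PermitRootLogin no
    PasswordAuthentication no    # key-based logins only
    AllowUsers someadmin         # hypothetical admin account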

There is a time and place for both security religions. The right choice can change based on the organization’s resources, threats, or assets. A government defense facility may lean much further toward Absolute Security, but a web development start-up may be best served by an Incremental approach.

I’m not saying either religious side is better or worse. I think it depends on the personality and environment. Hell, I would also be keen to say it can depend on the solution or situation. You might be forced to be Incremental with your desktop OS and shared servers (think web or SQL), but you’d be damned if you’ll budge from being an Absolutist on the network or on servers that only you use (think DNS or mail).

fundamental honeypotting paper

Recently read the paper Fundamental Honeypotting by Justin Mitchell. I scored this link from Andrew Hay.

As is typical of most SANS GIAC papers, the writing and layout are a bit rough at times, but I really dig the amount of information Justin presents about beginning honeypotting. I won’t litter this post with links, since the paper is filled with great links. He talks about Nepenthes and the Bubblegum open proxy as the main honeypot tools. He also discusses the use of iptables and tc (traffic control), Snort, and Swatch. Hell, he also has some useful tidbits about detecting whether a system is running as a guest VM or not.

I became just a little more convinced about the value of a honeypot, but not enough to move it up my list of projects to do at home. It’s on the list, just not very high, since it is more of a curiosity to me; I don’t really do active malware research.