zeltser’s tips on detecting web compromise

Lenny Zeltser goes over 8 tips for detecting website compromise. There are far too few security writers with enough technical chops to blend in amongst a pack of geeks at a security conference, but Lenny is one of the better ones, so any criticism I have for his writing is done very lovingly. Snuggly-like, even. I figured I’d make mention of this nice article and give my own reactions.

Lenny starts out with 3 overly obtuse best practices. Sort of like saying, “If you want to get into shape, you should run more,” which *sounds* easy. I think he treats these items the way I do: you’d be remiss if you left them out, but you can’t really do them justice in the space of a list item, so they stay obtuse.

1. Deploy a host-level intrusion detection and/or a file integrity monitoring utility on the servers. – The biggest problem with this (collective) item is how much activity it is going to generate for an analyst. And if you do tune it down to a quiet level, I’d argue that you’re not going to see what you want. But, as said already, it’s a necessary evil for a security posture, whether you like it or not. At a bare minimum, you should know every time a file is changed or a new file appears in your web root (with exceptions for bad apps that need to write temp files and other crap to themselves – the bane of web root FIM…).
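
To make the web root end of that concrete, here’s a rough baseline-and-compare sketch in Python. The paths are placeholders, and a real FIM tool (OSSEC, Tripwire, etc.) does far more, far more efficiently, but this is the bare-minimum idea:

```python
import hashlib
import json
import os
import sys

WEB_ROOT = "/var/www/html"              # placeholder web root
BASELINE = "/var/lib/fim/baseline.json" # placeholder baseline location

def snapshot(root):
    """Walk the web root and hash every file."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def compare(old, new):
    added   = [p for p in new if p not in old]
    removed = [p for p in old if p not in new]
    changed = [p for p in new if p in old and new[p] != old[p]]
    return added, removed, changed

if __name__ == "__main__":
    current = snapshot(WEB_ROOT)
    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(current, f)
        sys.exit("Baseline created; re-run to compare.")
    with open(BASELINE) as f:
        baseline = json.load(f)
    added, removed, changed = compare(baseline, current)
    for label, paths in (("ADDED", added), ("REMOVED", removed), ("CHANGED", changed)):
        for p in paths:
            print(f"{label}: {p}")
```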

2. Pay attention to network traffic anomalies in activities originating from the Internet as well as in Internet-bound connections. – The “Internet-bound connections” part really needs to be implemented as soon as possible, before an organization has so much going on that you can’t ever close things down without preparing everyone for the inevitable breakage of things no one knew were needed. Watching traffic coming in? Not so easy, and you’ll probably just end up looking for stupidly large amounts of traffic (which may be normal if you service a large client sitting entirely behind 2 proxy IPs) or the most obvious Slapper/Slammer IDS alerts. But you absolutely want to know that 500 MB of traffic just exfiltrated from your web/database server to some destination in Ukraine, and that it contained a tripwire entry from an otherwise unused database table. I would drop the buzzword “app firewall” into this item as well. You should also know what is normal coming out of your web farm (DMZ), and anything hitting strange deny rules on the network should be looked into.
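
As a crude illustration of the egress side, something like the sketch below could flag big outbound transfers from flow records. The CSV layout and the 500 MB threshold are made-up assumptions; in real life this data comes out of your netflow collector or firewall logs:

```python
import csv
from collections import defaultdict

FLOW_LOG = "outbound_flows.csv"     # hypothetical export: src_ip,dst_ip,bytes_out
THRESHOLD = 500 * 1024 * 1024       # flag anything over ~500 MB to a single destination

# Sum outbound bytes per (source, destination) pair.
totals = defaultdict(int)
with open(FLOW_LOG, newline="") as f:
    for row in csv.DictReader(f):
        totals[(row["src_ip"], row["dst_ip"])] += int(row["bytes_out"])

for (src, dst), total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    if total >= THRESHOLD:
        print(f"ALERT: {src} sent {total / 1024 / 1024:.0f} MB to {dst}")
```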

3. Centrally collect and examine security logs from systems, network devices and applications. – Collect: yes. Examine: yes, with managed expectations. I really want to say yes, but having done some of this, 99% of the examined stuff that bubbles up to the top is still not valuable. There’s a reason I think gut feelings from admins catch lots of incidents and strangeness on a system/network, and it’s not because it shows up clearly in logs. Especially with application logs, if you want them to be trim and tidy, you’re looking at a custom solution, which means custom man-hours, overhead, and future support resources.
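
For the “collect” half at least, even stock Python can push app events to a central syslog/SIEM box; the collector address below is a placeholder:

```python
import logging
import logging.handlers

# Placeholder collector address; point this at your syslog/SIEM ingest host.
# SysLogHandler speaks UDP syslog by default.
handler = logging.handlers.SysLogHandler(address=("logs.example.internal", 514))
handler.setFormatter(logging.Formatter("webapp: %(levelname)s %(message)s"))

log = logging.getLogger("webapp")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Example event worth shipping off-box.
log.warning("login failure for user %s from %s", "admin", "203.0.113.50")
```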

Side note on custom app logs: if you can, log/alert any time a security mechanism or validation routine gets triggered, for instance when someone attempts to put a ‘ into a search field (WAF territory).
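
A minimal sketch of what I mean, with a naive pattern list purely for illustration (a real validation layer or WAF goes well beyond a regex or two), and run_search standing in for whatever your app actually does:

```python
import logging
import re

security_log = logging.getLogger("app.security")

# Naive patterns for illustration only.
SUSPICIOUS = re.compile(r"('|--|<script|union\s+select)", re.IGNORECASE)

def handle_search(query, client_ip):
    if SUSPICIOUS.search(query):
        # The interesting part: the rejection itself becomes a security event.
        security_log.warning("input rejected: ip=%s query=%r", client_ip, query)
        return "Invalid search input."
    return run_search(query)

def run_search(query):
    # Hypothetical stand-in for the real search backend.
    return f"results for {query}"
```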

4. Use local tools to scan web server’s contents for risky contents. – You certainly should do this as much as you can. You can scan for even deeper things, like any new file at all (depending on the level of activity normally on your web root) or files created by/owned by processes you don’t expect to be writing files.
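
A crude local scan could look something like this; the backdoor-ish patterns and the 24-hour recency window are arbitrary choices for illustration, and the path is a placeholder:

```python
import os
import re
import time

WEB_ROOT = "/var/www/html"   # placeholder
RECENT = 24 * 3600           # flag files touched in the last day
# A few common PHP-backdoor tells; a real scanner uses a much bigger signature set.
PATTERNS = re.compile(rb"eval\s*\(\s*base64_decode|gzinflate\s*\(|passthru\s*\(|shell_exec\s*\(")

now = time.time()
for dirpath, _, filenames in os.walk(WEB_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        st = os.stat(path)
        if now - st.st_mtime < RECENT:
            print(f"recently modified: {path} (uid {st.st_uid})")
        with open(path, "rb") as f:
            if PATTERNS.search(f.read()):
                print(f"suspicious content: {path}")
```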

5. Pay attention to the web server’s configuration to identify directives that adversely affect the site’s visitors. – This can be a bit easier in Apache or even IIS 7.0+, but some web servers like IIS 6 are hideous for watching configurations. Still, definitely keep this in mind. Thankfully, on such servers attackers face extra hurdles beyond just writing a file into the web root.
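
For the Apache-style case, even a dumb diff against a known-good copy of the config goes a long way; the paths below are placeholders, and IIS 6’s metabase obviously needs different handling:

```python
import difflib

KNOWN_GOOD = "/var/lib/config-baseline/httpd.conf"   # placeholder trusted copy
LIVE       = "/etc/httpd/conf/httpd.conf"            # placeholder live config

with open(KNOWN_GOOD) as f:
    baseline = f.readlines()
with open(LIVE) as f:
    live = f.readlines()

# Any drift from the trusted copy is worth a human look.
diff = list(difflib.unified_diff(baseline, live, "known-good", "live"))
if diff:
    print("Config drift detected:")
    print("".join(diff))
```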

6. Use remote scanners to identify the presence of malicious code on your website. – I agree with the spirit of this, but I think internal scanning needs to happen as well. If a bad file is present but not discoverable from an existing link on your site or an easily guessed name, then an external scan will totally miss it.
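
To sketch the internal side: compare what actually sits on disk against what an external crawl managed to reach. The crawl output file here is hypothetical; it would come from whatever remote scanner you run:

```python
import os

WEB_ROOT = "/var/www/html"      # placeholder
CRAWLED  = "crawled_paths.txt"  # hypothetical crawler output, one URL path per line

# Build the set of paths that exist on disk, relative to the web root.
on_disk = set()
for dirpath, _, filenames in os.walk(WEB_ROOT):
    for name in filenames:
        rel = os.path.relpath(os.path.join(dirpath, name), WEB_ROOT)
        on_disk.add("/" + rel.replace(os.sep, "/"))

with open(CRAWLED) as f:
    linked = {line.strip() for line in f if line.strip()}

# Files that serve but were never linked/crawled deserve a closer look.
for path in sorted(on_disk - linked):
    print(f"on disk but never linked/crawled: {path}")
```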

I should take a moment to stress that many of these items rely on signatures or looking for known badness, but none of that should replace actually looking, at some point, at every page you have for things that are clearly bad to a human eye, such as a defacement calling card.

7. Keep an eye on blacklists of known malicious or compromised hosts in case your website appears there. – It’s hard to say this shouldn’t be done, but it certainly offers little return on the investment of time. If you *do* happen to show up there before any other alarms sound, then clearly it is nice.
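
At least the checking part is cheap to script against a DNS-based blacklist. The zone below (Spamhaus DBL) is just one example and not necessarily the list that matters most for web malware, but the query style is the same across most DNSBLs:

```python
import socket

DOMAIN = "example.com"        # your site's domain
ZONE   = "dbl.spamhaus.org"   # one domain blacklist; others use the same query convention

# DNSBLs answer listed domains with an address; NXDOMAIN means "not listed".
query = f"{DOMAIN}.{ZONE}"
try:
    answer = socket.gethostbyname(query)
    print(f"LISTED: {DOMAIN} appears on {ZONE} (return code {answer})")
except socket.gaierror:
    print(f"{DOMAIN} not listed on {ZONE}")
```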

8. Pay attention to reports submitted to you by your users and visitors. – I’d personally overlook this item 9 times out of 10, but it is really oh so necessary.

If I wanted to add an item or two:

9. Scorched earth. – While not a detection mechanism, you do run certain risks if your web code sits out on the Internet for a long period of time and grows stale. You should refresh from a known-good source as often as you think you need to. This can include server configs and the like. (For instance, I roll out web root files every x minutes on my web servers at work, and I regularly wipe out web server configs and rebuild them via automation.) Don’t be that company that had a backdoor gaping open to the world for 2 years when a simple code refresh would have closed the hole. A diff comparison mechanism may satisfy the ‘detection’ criterion of this list.
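
A sketch of that diff-comparison idea, using Python’s filecmp against a known-good deployment tree; both paths are placeholders for wherever your trusted copy and live web root actually live:

```python
import filecmp
import os

KNOWN_GOOD = "/srv/deploy/known-good"   # placeholder: the source you would redeploy from
LIVE       = "/var/www/html"            # placeholder: what is actually serving traffic

def report(cmp):
    """Recursively report anything in the live tree that strays from the known-good copy."""
    for name in cmp.left_only:
        print(f"missing from live: {os.path.join(cmp.left, name)}")
    for name in cmp.right_only:
        print(f"unexpected in live: {os.path.join(cmp.right, name)}")
    for name in cmp.diff_files:
        print(f"differs from known good: {os.path.join(cmp.right, name)}")
    for sub in cmp.subdirs.values():
        report(sub)

report(filecmp.dircmp(KNOWN_GOOD, LIVE))
```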