humor me while I make PCI really hard for a moment

Preaching to the choir, but here is my illustration of how difficult PCI can be. Let’s look at requirement 10.5.1: Limit viewing of audit trails to those with a job-related need. Let’s also keep in mind the wording of 10.5.2: Protect audit trail files from unauthorized modifications. Essentially we’re talking about log management.

(If you’ve worked in logs before, you can probably guess where I’m going to go…)

Let’s say Bob uses LogRhythm as his log management software of choice, and he points his devices over to it. For simplicity, let’s just say he has a Windows Server OS box that is in scope for PCI. Since the LogRhythm agent sucks up these logs and throws them at the master server, Bob submits only a screenshot of the user account list inside LogRhythm. Bob reasons that only these people can see the logs in the SIEM.

Done! Right?

Well, wait a minute. The point of these PCI items is twofold. First, make sure unauthorized people can’t view the logs, and that only those who need to see them can (an important distinction, sadly), since logs may give details away or help an attacker see what errors she generates. Second, make sure the attacker doesn’t have a chance to modify those logs, or flat-out destroy them.
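On the modification side, one defense-in-depth technique (not something the PCI wording mandates, just an illustration of what “protect from unauthorized modifications” can look like in practice) is a tamper-evident hash chain: each log record carries a hash of the previous record, so editing or deleting any entry breaks every link after it. Here’s a minimal Python sketch, with all function and field names made up for the example:

```python
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(chain, message):
    """Append a log entry whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    digest = hashlib.sha256((prev_hash + message).encode()).hexdigest()
    chain.append({"message": message, "hash": digest})

def verify_chain(chain):
    """Re-derive every hash; a modified or deleted entry breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        expected = hashlib.sha256((prev_hash + entry["message"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "user bob logged in")
append_entry(log, "user bob viewed cardholder report")
assert verify_chain(log)

log[0]["message"] = "nothing to see here"  # attacker edits a record
assert not verify_chain(log)
```

Note this only makes tampering *detectable*, not impossible: an attacker with full control of the box can rebuild the whole chain, which is exactly why shipping logs off-host quickly still matters.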

As some vendors in this space will tell you, there are gaps here! The gap between when Windows gets the event and when it saves it to the event log. The gap between when the event is written to a local log and when LogRhythm’s agent grabs it up (including when an attacker has been able to turn off the collector agent). Moving forward, what about the backup location of log files? The agent-to-master communication? (Better yet, let’s talk syslog in terms of confidentiality and integrity!)

Another way to look at it is just to evaluate whether our audit logs are protected such that unauthorized people can’t simply stumble upon them and/or edit them. If an attacker has subverted a system and can intercept logs before they’re gathered, that just might be an advanced case. If an attacker pops Local System on Bob’s Windows/IIS box, should Bob still be expected to protect those logs completely? I think that’s arguable. Likewise, someone may argue that the more open logs like the Windows System and Application logs aren’t in scope here, and only the Security log is, since it’s more locked down by default in Windows. Perhaps… In cases like this, you at least have logs up to the point when the attacker gained enough rights to start hiding her tracks.

I’m not going to diss “just enough security,” since I think that’s what we often preach anyway when we talk risk. I just wanted to illustrate that even slam-dunk PCI items, when really analyzed deeply, are not always so easy to rush through.

Update: Also check out 11.5: Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly. This raises the obvious question, “What files should I monitor?” It’s not an easy question, and most orgs/people will opt not to tell you unless you’re paying them money to do so. So, do you purchase and deploy a FIM tool with defaults? Which executables and dlls and files do you monitor? Unless you do the bare minimum of following vendor defaults, this won’t ever be something you just do and forget forever…and that’s before you even deal with patch-related false positives, or a misguided desire to log *everything* just because you want to, and then suffer through the resulting flood of false positives…
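Whatever tool you buy, the core of that weekly comparison is conceptually just baseline-and-diff over cryptographic hashes. A rough Python sketch of the idea (the function names and the notion of what to monitor are my own assumptions, not anything from the standard or a particular vendor):

```python
import hashlib
import tempfile
from pathlib import Path

def hash_file(path):
    """SHA-256 of a file's contents, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def take_baseline(paths):
    """Record a hash for every monitored file that currently exists."""
    return {str(p): hash_file(p) for p in paths if Path(p).exists()}

def compare(baseline, paths):
    """Report files that changed, appeared, or vanished since the baseline."""
    current = take_baseline(paths)
    changed = [p for p in current if p in baseline and current[p] != baseline[p]]
    added = [p for p in current if p not in baseline]
    removed = [p for p in baseline if p not in current]
    return changed, added, removed

# Toy demo: baseline a config file, then "tamper" with it.
with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "web.config"
    cfg.write_text("debug=false")
    baseline = take_baseline([cfg])
    cfg.write_text("debug=true")  # unauthorized modification
    changed, added, removed = compare(baseline, [cfg])
    assert changed == [str(cfg)]
```

The hard part, as noted above, isn’t the hashing; it’s deciding which paths go in that monitored list and triaging every diff a patch cycle generates.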