useless notes from the verizon data breach report 2

One of the major recommendations of the Verizon DBIR is to ‘collect and monitor event logs.’ You might think this is a no-brainer, but further into the report it comes out that a stupid majority of these breaches were “found” through third-party notification (70%, pg 38).

Hell, I would consider the next two categories “lucky” events, where someone noticed an issue and poked around enough to uncover the problem; that adds another 24% of the breaches. In fact, only 8% of the breaches were found by what I would consider actual detection methods (unless the audit parts were luck too). Yuck. This means internal detection is failing or not being used properly. (Granted, the data points in this report come from organizations that most likely do not have strong security controls and programs in place, so these detection numbers might be lower than general averages.)

This morning I read on the LiquidMatrix site about a UC Berkeley breach that went undiscovered for 6 months:

“…when administrators performing routine maintenance came across an ‘anomaly’ in the system and found taunting messages that had been posted three days earlier…”

In other words: some admin was on the box for other reasons and happened to find the messages. “Hello, what’s this? Oh crap…” Sheer fucking luck. (Or bad luck, if you count the 6 months it took to notice anything…)

We need to continue to push for 3 things:

1. Better detection tools. I consider this the first and least important of the three, partly because blaming tools is like blaming someone else for your own issues. “Well, the tool sucks, so it’s not my fault!” That’s an irresponsible knee-jerk reaction. Yes, the tools need to get better: more efficient, more accessible, and smarter.

2. Using the tools. Tools that gather data, or hell, even make decisions on that data, still need humans to monitor them! Collecting logs that never get looked at is nearly as bad as never collecting them. Collecting logs that a system turns into decision points and alerts that nobody ever responds to is in the same boat. Throwing in tools without having administrators tune, watch, and respond to them is silly (a rough sketch of the kind of trivial log watching I mean is below this list). I would also include properly using the tools in this category, especially when it comes to administrative decisions. Where do you put your IDS sensors? Where do you put your Tripwires? What files/folders/systems do you monitor? What are your tuning standards and written response policies? Is there any consistency to your investigations?

3. Get the staff. I’m not going to be one of those people who think tools should be perfect. I think it is perfectly fine for an admin poking around a server to be the one who discovers an incident that slipped through the cracks. What I don’t think is perfectly fine is that discovery taking 6 months. Would a tool have seen ‘taunting messages’ on a server? It might have noticed new files, but it would never be able to read those files and deduce their intent. I firmly believe a human needs to poke around and “feel out” anomalies when possible. If an analyst has few alerts on his plate, he should be empowered and encouraged to poke around, scan some file servers for recently updated files (another sketch of that below), run surprise audits on access levels, etc.
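
To make point 2 a little more concrete, here’s a minimal sketch of the dumb-but-useful kind of log watching I’m talking about. This is illustrative Python, not a real monitoring product: the log path, the failed-login pattern, the thresholds, and the alert() stub are all assumptions you would swap out for your own environment and response process.

```python
#!/usr/bin/env python3
"""Toy log watcher: flag repeated SSH auth failures per source IP.

Paths, patterns, and thresholds are placeholders; adjust for your environment.
"""
import re
import sys
import time
from collections import defaultdict

LOG_PATH = "/var/log/auth.log"   # example path; yours may differ
PATTERN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5                    # failures before we complain
WINDOW = 300                     # seconds

def alert(ip, count):
    # Stand-in for whatever your response process is: email, ticket, pager...
    print(f"ALERT: {count} failed logins from {ip} in the last {WINDOW}s", file=sys.stderr)

def watch(path=LOG_PATH):
    failures = defaultdict(list)          # ip -> list of timestamps
    with open(path, "r", errors="replace") as fh:
        fh.seek(0, 2)                     # start at end of file, like `tail -f`
        while True:
            line = fh.readline()
            if not line:
                time.sleep(1)
                continue
            m = PATTERN.search(line)
            if not m:
                continue
            ip = m.group(1)
            now = time.time()
            # keep only failures inside the window, then add this one
            failures[ip] = [t for t in failures[ip] if now - t < WINDOW] + [now]
            if len(failures[ip]) >= THRESHOLD:
                alert(ip, len(failures[ip]))
                failures[ip].clear()      # don't re-alert on the same burst

if __name__ == "__main__":
    watch()
```

The point isn’t the script; it’s that something (or someone) actually reads the logs, and something actually happens when the alert fires.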
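
And for point 3, here’s an equally rough sketch of the “poke around for recently changed files” kind of surprise audit. Again, this is a hypothetical example, not a replacement for real file integrity monitoring: the path, the time window, and what you do with the output are entirely up to you. It’s the sort of thing an analyst with a quiet afternoon could point at a file server just to see what shakes out.

```python
#!/usr/bin/env python3
"""Toy 'poke around' audit: list files modified in the last N days under a path."""
import os
import sys
import time

def recently_modified(root, days=3):
    """Walk `root` and return (mtime, path) pairs for files changed in the window."""
    cutoff = time.time() - days * 86400
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                mtime = os.path.getmtime(full)
            except OSError:
                continue                  # file vanished or unreadable; skip it
            if mtime >= cutoff:
                hits.append((mtime, full))
    return sorted(hits, reverse=True)     # newest first

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for mtime, path in recently_modified(root):
        print(time.strftime("%Y-%m-%d %H:%M", time.localtime(mtime)), path)
```

A tool won’t read a taunting message and understand it, but a human skimming a list like this just might.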