If it weren’t for the blogs I follow, I’d miss tidbits of news as the weeks roll past. Like this update to an “old” Wal-Mart breach that occurred back in 2006. (This is what I remind myself when I repost rehashed things…just in case I want the links later on or someone who reads mine didn’t see it elsewhere.)
I’m pulling out nuggets that struck a chord with me. Yes, I’m cynical!
Wal-Mart uncovered the breach in November 2006, after a fortuitous server crash led administrators to a password-cracking tool that had been surreptitiously installed on one of its servers. Wal-Mart’s initial probe traced the intrusion to a compromised VPN account…
First, I’m not surprised that the breach was discovered by accidental (or third-party) means. This probably happens 90%+ of the time (my own figure, and I think I’m lowballing it!). Second, it is quite well known that VPN connections are an issue. I don’t want to take the time to look it up, but I distinctly recall reading in numerous places that remote employees tend to feel more brazen about stealing information and, as in Wal-Mart’s case, run on less secure systems with less secure practices, yet connect directly into sensitive corporate networks. Basically, VPN (remote) access is not to be taken lightly. If someone can subvert that one single login, your entire organization could start falling down. (Think how bad it would be if an IT admin logged into the VPN from his home machine, which was infected with a keylogger. Hello admin login!)
Wal-Mart says it was in the process of dramatically improving the security of its transaction data…
“Wal-Mart … really made every effort to…
Security doesn’t give a shit about talk. You’re either doing it or you’re not. That’s why verifying that the talk actually got done is driving the industry. It also illustrates a huge problem (one that affects more than just security) when management has a reality/belief gap between what they think is going on and what is really going on.
Strickland says the company took the [PCI-driven] report to heart and “put a massive amount of energy and expertise” into addressing the risks to customer data, and became certified as PCI-compliant in August 2006 by VeriSign.
I’m not about to wave around the fact that a PCI-compliant firm had a data breach. In this case, no PCI-related data was actually divulged. But…this breach could have led down the road of revealing POS source code, flows, and infrastructure such that those defenses could have been broken. Basically: chasing PCI compliance is not the same as chasing proper security for your organization. It’s a small slice and sample of what you should have in mind when you think corporate security. For instance, many orgs spend a lot of resources limiting their PCI compliance scope; rather than tackling the security of those systems, they argue them out of scope. Reminds me of shoving my toys under my bed and calling my room clean. Out of sight, right?
I think this also underscores the absolute need for organizations of sufficient size to have a dedicated security team with high influence over all of IT. It’s not just about detection mechanisms and watching dashboards, especially if the network/server teams place those sensors in bad positions or don’t feed them proper flows. You can’t just watch; you have to poke and probe and continuously test your own systems and architecture for holes. And not just via an annual pen-testing team, but through people who have vested interests in and deep knowledge of the organization and its innards. You can’t find out your IDS, firewalls, logs, and patching efforts are “inconsistent” after a real breach. If you need to, role-play security incidents just like the business demands role-playing disaster recovery plans.
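To make the “poke and probe” idea concrete, here’s a minimal sketch of the kind of continuous self-check an internal team could schedule: compare what ports are actually reachable against a baseline of what *should* be exposed, and flag every mismatch. The hostnames and the `audit`/`port_open` helpers are my own hypothetical illustration, not anything from the Wal-Mart story or any particular tool.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connect to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(baseline, observed):
    """Compare an expected-exposure baseline against observed scan results.

    baseline: {(host, port): should_be_open}
    observed: {(host, port): is_open}
    Returns a list of (host, port, problem) findings for anything inconsistent.
    """
    findings = []
    for (host, port), should_be_open in baseline.items():
        is_open = observed.get((host, port), False)
        if is_open and not should_be_open:
            # Something internal is reachable that the baseline says shouldn't be.
            findings.append((host, port, "unexpectedly open"))
        elif should_be_open and not is_open:
            # An expected service (e.g., the VPN endpoint) has gone dark.
            findings.append((host, port, "expected service down"))
    return findings

if __name__ == "__main__":
    # Hypothetical baseline: the VPN gateway should answer; the DB should not
    # be reachable from wherever this probe runs.
    baseline = {("vpn.example.com", 443): True, ("db.internal", 3306): False}
    observed = {t: port_open(*t) for t in baseline}
    for host, port, problem in audit(baseline, observed):
        print(f"{host}:{port} -> {problem}")
```

Run from cron (or any scheduler) against a handful of vantage points, this is the difference between learning your firewall rules are “inconsistent” from a report you wrote versus from a breach notification.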