CSO has another article up with a story of a not-quite-data-breach. I apologize for the lack of attribution; I don’t recall where I first got linked to this.
While this does drive home the value of code reviews and data access controls, it also, to me, drives home another point.
I fully agree we need to build things securely and correctly the first time, and that we need code reviews and less willy-nilly development. And while that is all a great goal to keep in mind, I will always concede that it is not perfectly, humanly, or economically possible to rely on that paradigm alone. Kinda like saying our endpoints really do need to be secure, but really, will they ever be satisfactorily secure with non-geek users at the helm?
This is why I will always put so much weight back onto the network as a place to detect and monitor everything else. The company in question should have easily been able to notice the outgoing data to their vendor from their web servers (1 terabyte in 6 months!).
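That kind of check doesn’t require anything fancy. Here’s a rough sketch of what I mean, assuming you can export flow records somewhere; the file name, column names, internal range, and threshold below are all placeholders, not any particular product’s format:

```python
#!/usr/bin/env python3
"""Rough sketch: flag internal hosts pushing an unusual volume of data out.

Assumes flow records exported to a CSV with src_ip, dst_ip, and bytes
columns -- all names and numbers here are placeholders.
"""
import csv
import ipaddress
from collections import defaultdict

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")   # assumed internal range
THRESHOLD_BYTES = 50 * 1024**3                       # e.g. 50 GB outbound per period

outbound = defaultdict(int)

with open("flows.csv", newline="") as f:             # placeholder export file
    for row in csv.DictReader(f):
        src = ipaddress.ip_address(row["src_ip"])
        dst = ipaddress.ip_address(row["dst_ip"])
        # Only count traffic leaving the internal network.
        if src in INTERNAL_NET and dst not in INTERNAL_NET:
            outbound[str(src)] += int(row["bytes"])

for host, total in sorted(outbound.items(), key=lambda kv: kv[1], reverse=True):
    if total > THRESHOLD_BYTES:
        print(f"ALERT: {host} sent {total / 1024**3:.1f} GB outbound this period")
```

A web server quietly shipping a terabyte to one outside address over six months would light up a report like that very quickly.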
Now, they may not have been able to tell exactly what was going on since the traffic was wrapped in SSL, but I doubt it would take much effort to get in the middle and decrypt it anyway, depending on how well the app was coded to check for valid certs (chances are, not at all). Or, at the very least, they could have started digging deeper into the web servers to see what was going on.
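To illustrate that cert point: if the app’s outbound HTTPS calls look anything like the first pattern below (made-up code using the requests library and a made-up vendor URL, not anything from the story), then a monitoring proxy presenting its own cert can sit in the middle and read everything without the app ever noticing:

```python
# Illustration only -- vendor.example.com is a placeholder, and this is not
# the code from the story, just the pattern that makes interception trivial.
import requests

payload = {"report": "..."}

# Pattern 1: certificate checking disabled. Anything that can get in the
# middle (including your own SSL-inspecting proxy) can present its own cert
# and decrypt this traffic silently.
requests.post("https://vendor.example.com/upload", json=payload, verify=False)

# Pattern 2: default verification against trusted CAs. Interception now
# means pushing your proxy's CA cert to the server's trust store, and a
# rogue man-in-the-middle gets an SSL error instead of the data.
requests.post("https://vendor.example.com/upload", json=payload)
```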
But the fact remains that proper network monitoring can detect bad things like this extruding from an enterprise.
(Likewise, proper network controls like firewalls should also be able to notice or log blocked outgoing 80/443 traffic from the web servers. While some apps do end up needing a hole open to a third party, it should be a pinhole, not a blanket allowance. But again, we’re ultimately still talking about the network.)
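And if you want to sanity-check that the hole really is just a pinhole, something as simple as the sketch below, run from the web server itself, will tell you. The allowed and canary hostnames are placeholders for whatever your app actually needs:

```python
#!/usr/bin/env python3
"""Quick egress check from a web server: only the approved vendor pinhole
should connect; the canary destinations should be blocked."""
import socket

ALLOWED = [("vendor.example.com", 443)]                      # the one pinhole the app needs
CANARIES = [("www.google.com", 443), ("example.org", 80)]    # should be blocked outbound

def can_connect(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in ALLOWED:
    status = "ok" if can_connect(host, port) else "BLOCKED (pinhole broken?)"
    print(f"allowed  {host}:{port} -> {status}")

for host, port in CANARIES:
    status = "OPEN (egress too permissive!)" if can_connect(host, port) else "blocked"
    print(f"canary   {host}:{port} -> {status}")
```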