Jeremiah Grossman has an interesting post that covers two neat topics: scalable scanning and WhiteHat’s hardware setup. Cool stuff on the second part. As for the first, I think topics like scalable security and scanning are worth watching for anyone who believes that today’s emphasis on IT, and on security in particular, will lead to further outsourcing of those roles to specialist groups. I’m not an executive or into accounting, but I’m not oblivious to the idea that IT/tech/security is not a core competency in most organizations; instead it’s a cost center (i.e. not a competitive advantage either). (Yeah, I like dropping terms I actually learned in school now and then…)
Then again, maybe a specific case like Jeremiah’s is a bit unusual. I mean, look at how much their hardware (storage) requirements have to grow as their scan targets increase, and no doubt they need tools and/or people to make sense of the reports. Perhaps the real battleground isn’t desktop scanning software scalability, but rather how to do web security scanning quickly and meaningfully (from a sort of macroscopic/meta vantage point), while conceding that only x% of the scanning can be done via automated means.
It (obviously) crossed my mind that another group with a use case for large-scale scans could be attackers. But that may be a bit of a red herring. Do they need such huge scans to be successful? No. Even if they did, as Jeremiah demonstrated, you’d need some serious infrastructure (provided by botnets, no doubt) to power the whole thing, and the more of that you need, it seems to me, the more exposed said attacker would be. Attackers are still far too successful with smaller-scale, smaller-footprint attacks that can be wielded surgically from pinpoint locations that are easy to expend. Even assuming the worst, I doubt attackers would ever need to move beyond desktop-grade scanners anyway.