the discussion on continuous patching

First, go read Rafal Los’ post over at the HP blogs: Continuous patching – is it viable in the enterprise?, along with the comments. I really, deeply dislike the commenting system on their platform (even more so on my Linux desktop, let alone the damn captcha and moderation rules…), so I’ll make my reactive comments here, largely because it’s a great discussion.

(Disclaimer: I’m super sensitive to downtime discussions right now, since my company is suddenly super sensitive to them, which means more work for my team, more after-hours work for my team, and lots of confusion over how to reconcile “no downtime” mandates with “make progress” expectations. It’s painful in the SMB world, where expectations between the business and technology [and even security!] are still in a state of upheaval this decade.)

1. Hindsight is always valuable, but in too many cases with technology risk in business, we’re just going to keep bouncing between contradictory “hindsight lessons,” which results in analysis paralysis. At some point you just need to buck up and do it, and stop playing business politics about it.

2. Patching is “simple” (even though it can easily be over-thought into analysis paralysis), and everyone can put in their 2 cents, from IT geeks to the lowest users to the highest execs. Yet we can’t even begin to agree on it. Just like so many things in security, we need to stop looking for the “right” answer to the problem. It will always be different. If there were one correct answer, it would hit us all like a truck to the face and the discussion would be over. That said, this discussion is still useful.

3. “Patching” in an organization isn’t just about approving patches in WSUS, or even testing them. It might also mean getting them configured in a central management tool like Altiris, or updating image files and the like. For my SMB, smaller, more frequent patching (presumably at on-demand intervals) really sucks and would probably result in only bothering with major upgrade releases.

4. When we’re talking the web world, sure, downtime may be minimized as systems are updated, but that doesn’t mean users feel all hunky-dory when a “patch” changes their app layout (thanks Google Reader/Gmail, Twitter, etc…). That may not be “downtime” to managers, but it may as well be downtime for users. And we may not even be talking yet about developers making constant little changes to web site code, or at least more frequent changes. It’s always fun when frequent changes are made and a problem isn’t found right away, so nobody can cleanly correlate it to that last update (see the sketch after this list). It’s also fun when users update their own shit on their systems, leaving a business in an unpredictable desktop state.

5. What is the goal of patching? To fix bugs that my users don’t see and fix security holes that aren’t currently letting in attackers, or to roll out new features that my users would like? When management becomes sensitive to downtime, one of those gets traction and the other does not.

6. I like that last comment from Chris Abramson. I dislike the part about bringing up AV signature updates (that’s not a patch process, more of a data update), but I do like the part about baking in stability and separation so that one update doesn’t bring the whole host down. And while that’s noble, I echo the sentiment that it takes many years and many resources to even begin doing it. Not something that today’s fast-moving businesses, technologies, and developers can do, or are willing to do.
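
To make the correlation pain from point 4 a little more concrete, here’s a minimal Python sketch (the deploy log, timestamps, and function names are made up for illustration, not pulled from any real tooling) that asks which changes landed in the window before a reported problem. With frequent small changes, the answer stops being a single obvious culprit.

```python
from datetime import datetime, timedelta

# Hypothetical deploy log: (timestamp, change description)
deploys = [
    (datetime(2012, 3, 5, 9, 15), "tweak search widget CSS"),
    (datetime(2012, 3, 5, 11, 40), "bump session timeout"),
    (datetime(2012, 3, 5, 14, 5), "hotfix login redirect"),
    (datetime(2012, 3, 6, 10, 30), "new header layout"),
]

def candidate_culprits(incident_time, lookback=timedelta(days=2)):
    """Return every deploy that landed within `lookback` of the incident.

    With infrequent patching this usually yields one candidate; with
    continuous small changes it yields several, and "that last update"
    is no longer an obvious suspect.
    """
    return [
        (ts, desc)
        for ts, desc in deploys
        if incident_time - lookback <= ts <= incident_time
    ]

# A problem reported a day after several small changes went out:
incident = datetime(2012, 3, 6, 16, 0)
for ts, desc in candidate_culprits(incident):
    print(ts, "-", desc)
```

Run it and all four changes show up as candidates, which is exactly the “which update did this?” shrug that frequent, quiet changes tend to produce.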