I didn’t even have to read this article to know where Michael Starks was going with “When Disclosure Can Kill” (pdf article here).
I’m not sure Starks made any assertions that are challengeable; in the third-to-last paragraph he poses the important questions, but he doesn’t really take a stance on them. Essentially, when a vulnerability can directly threaten a life or lives, extra care should be taken during the disclosure process. There’s really nothing to argue there. The stakes of this discussion can be raised pretty easily, though.
1. Whether disclosed or not, the vulnerability is still there. If a vulnerability is found, the vendor should take extra care to fix the issue. The heaviest weight of action and responsibility should be on the vendor. And just because something isn’t disclosed doesn’t mean someone else won’t find and disclose it tomorrow, or that some actor isn’t already adding it to their attack arsenal. (Things like medical devices really sound like a great place to spend nation-state intelligence agency research dollars, rather than leaving them to private persons.)
2. Pray tell, how exactly will all the devices be updated with any subsequent fixes to a vulnerability? I’m not sure there’s an answer here, and it certainly isn’t a problem unique to medical devices (ATMs?). The easy answer is to put them on the Smart Gri…I mean, an Internet-connected network. Which of course opens a whole host of new issues. Still, even if a vendor develops a fix, are they *ever* going to go public enough to our (infosec) satisfaction? The question of going public with any details at all should be a central part of the discussion.
This really is a huge discussion, in part because the general public doesn’t deal properly with security scares. What exactly would any reasonable person with a pacemaker do when told their device has a security hole that could kill them? Ditch their vendor, of course! But that’s not necessarily the correct answer. If the vendor handles it well, wouldn’t that mean they just learned a valuable lesson internally that may help prevent similar issues?
3. What if the vendor does nothing? There’s another big window here where the vendor does nothing or feigns ignorance about an issue. It really shouldn’t happen, especially considering the bad press that would result, but it is still a fact of life for researchers. Should the researcher go public enough to elicit action?
4. Can we draw parallels with product recalls that put lives at risk? You know, cars, baby strollers, laptop batteries, children’s toys… Maybe the security of various locks that are tested in the locksport community? I’ll just throw that out there for now.
5. Is it better to inform the public about an issue, or to hide the issue from potential attackers?
If asked my really generic opinion, I’d still side with the idea that information wants to be free, and eventually it will be. That doesn’t mean disclosure of issues must happen right away, but any issues found need to be dealt with and eventually divulged to the public, with proper recognition given to the researchers involved. And there should always be heavy emphasis on security in development, and on vendor acceptance of public security research and assistance. I know it screws with your bottom lines and means an unplanned project for your devs to fix, but that’s life in technology.
In the end, I’m cynical enough to believe that if a company can suppress information, it absolutely will. It’s a natural, self-preserving defensive reaction.