Nice article by Wired on privacy defender Chris Soghoian. It also acts as a great illustration of why disclosure matters. (The irony of public disclosures serving to keep information private is not quite lost on me…) 🙂
I’m not sure Starks made any assertions that are challengeable; in the third-to-last paragraph he poses the important questions but really isn’t taking a stance on them. Essentially: when a vulnerability can directly threaten a life or lives, extra care should be taken during the disclosure process. There’s really nothing to argue there. The stakes of this discussion can be raised pretty easily, though.
1. Whether disclosed or not, the vulnerability is still there. If a vulnerability is found, the vendor should take extra care to fix the issue; the heaviest weight of action and responsibility should be on the vendor. And just because something isn’t disclosed doesn’t mean someone else won’t find and disclose it tomorrow, or that some actor isn’t already adding it to their attack arsenal. (Things like medical devices really sound like a great place for nation-state intelligence agencies to spend research dollars, rather than leaving the research to private persons.)
2. Pray tell, how exactly will all the devices be updated with any subsequent fixes to a vulnerability? I’m not sure there’s an answer here, and it certainly isn’t a problem unique to medical devices (ATMs?). The easy answer is to put them on the Smart Gri…I mean, an Internet-connected network. Which of course opens a whole new host of issues. Still, even if a vendor develops a fix, are they *ever* going to go public enough to our (infosec) satisfaction? The question of going public with any details at all should be a central part of the discussion.
This really is a huge discussion; for instance, the general public doesn’t deal well with security scares. What exactly would any reasonable person with a pacemaker do when told their device has a security hole that could kill them? Ditch their vendor, of course! But that’s not necessarily the correct answer. If the vendor handles it well, haven’t they just learned a valuable lesson internally that may help prevent similar issues?
3. What if the vendor does nothing? There’s another big window here where the vendor does nothing or feigns ignorance about an issue. It really shouldn’t happen, especially considering the bad press that will result, but it is still a fact of life for researchers. Should the researcher go public enough to elicit action?
4. Can we draw parallels with product recalls that put lives at risk? You know, cars, baby strollers, laptop batteries, children’s toys… Maybe the security of various locks that are tested in the locksport community? I’ll just throw that out there for now.
5. Is it better to inform the public about an issue, or to hide the issue from potential attackers?
If asked for my really generic opinion, I’d still side with the idea that information wants to be free, and eventually it will be. That doesn’t mean disclosure of issues must happen right away, but any issues found need to be dealt with and eventually divulged to the public, with proper recognition for the researchers involved. And there should always be heavy emphasis on security in development, and vendor acceptance of public security research and assistance. I know it screws with your bottom lines and means an unplanned project for your devs, but that’s life in technology.
In the end, I’m a bit cynical: if a company can suppress information, it absolutely will. It’s a natural self-preservation and defensive reaction.
Via the PCI Guru blog I see quite a few Lucky Supermarket stores have been hit with a wave of card skimmers attached (somehow, no one is clear on the details) to the self check-out devices. The PCI Guru author lists quite a lot of good detail about physical controls around devices like these and gas pumps, specifically about access to the innards of these devices.
When my bank, or more specifically, the ATM that I use most of the time, was updated with a new model in the last few years, I was extremely distraught at the garish, plasticky, piecemeal face to the device. In the past, my ATMs-of-choice have all been pretty solid panel with entirely matching fronts and pretty predictable angles and lines. I should snap a picture of this new one, because it’s awful, has various colors and textures, and upon quick inspection would look like lots of pieces were added on after the fact. It makes me feel dirty every time I use it.
It can’t be used as a PCI control, but there should be weight given to the ability of customers (and employees) to see at a glance that something is immediately wrong or out of place. Even the DBIR shows that detection-by-suspicion deserves weight, given how often a third party notifies a breached org, or how often someone just notices something weird or out of place. However, this is all defeated if an attacker can get to the innards of the device and place their gear there, out of sight of everyone.
Similar credence can be given to hardware keyloggers attached to computers. It’s one thing to detect them digitally, but how often do important users check the back of their systems?
Via the New School (yes it’s awkward!) is a link to an article by Jay Jacobs: A Call to Arms: It is Time to Learn Like Experts. The pdf article is a bit of a heavy read, so skip ahead and read the meat starting at the section titled, “Information security is a wicked environment.” The article is excellent and has great points.
The short of it is, information security defenders don’t get very good or timely feedback to learn from. This results in lots of opinions from possibly faulty intuition. (Yeah, I’m grossly paraphrasing.) It really helps to ask the question, “How would I know if I am wrong?”
I like that question, and it’s something in defense that you can tackle which will result in some relatively quick feedback. It’s not dissimilar to testing your systems before and after a change to ensure whatever you’ve done is working as needed.
My only fear is that the paper carries an undertone that if only we learned properly and had effective feedback loops, we’d find the right answer to security. I’m not the biggest fan of thinking there is any one right answer. I guess I look at it from the security side and would suggest that no one in many centuries has solved the security problem. (At least in general; there are plenty of resulting secure situations, but they’re situational.) I think this works nicely for the given example of developers and code security, but less so for broader topics.
I guess I do also fear that rather than move on and say something like, “Patch systems,” we instead argue semantics and values and analysis of that suggestion for years, rather than just Getting It Done. My gut is still usually pretty accurate in determining whether someone knows what they’re saying/suggesting when it comes to security, but not because I spent excessive time analyzing them.
Let’s ignore the constant threat to privacy with today’s mobile devices (I don’t heavily trust even the base Android platform, coming from a company whose profits rely on advertising and consumer intelligence).
I really believe there are some clear parallels between today’s smartphone platforms and yesteryear’s AOL. Both are really trying to make usage of a technology accessible and controlled, to an extent. Essentially, a controlled, walled-garden experience. But this means there are people in between you and what you want to do (carriers, the OS, apps), and they can do whatever you allow them to do, which is a lot. Since this technology is so new and complex, all three of those tiers pretty much do what they want without most people having any idea what they could even potentially do, let alone actually do.
Of course, in some clear ways, this experience is actually worse than AOL. At least AOL offered a controlled experience for most users. Smartphones (and even tablets) have a nearly unlimited range of apps and settings and junk on them. I imagine someday soon this will just be too frustrating an experience (much like PCs, only with less control) to sustain today’s manifestation of the market. The usefulness is being met, but there’s a lot of growing frustration with that small computer in your pocket.
The future may be further control, to be honest, but we’ll see!
SCADA attacking is big news right now. And this anonymous article over on Rafal Los’ blog, “The War Over SCADA – An Insider’s Perspective on the Hype and Hyperbole,” is a must read to counteract the rightfully called-out hyperbole in the media.
However! This is not a bulletproof rebuttal to recent SCADA fears. (Disclaimer: I’m not a critical systems/infrastructure or SCADA expert, or even an amateur. I may even be using the term SCADA overly broadly.) (Disclaimer: I’m the first person to say news media has a bias; they often report what they want, and what they want is eyeballs, and drama gets eyeballs…)
First, let’s scattershot the good, and there’s far more good in this article than bad. I really like that the author (let’s just say “he” so I don’t have to play the pronoun game) starts out by putting hacking risks into perspective; truly there is more to worry about from natural events and mechanical and electrical failure. I doubt anyone would argue that. I also like the discussion about compliance and auditing and outliers and the differences between various sectors (water vs. electric). This is the sort of stuff most pundits aren’t actually researching and can’t speak to. So I greatly appreciate this insight.
But, like I said, there are some holes. Nothing that kills the discussion, but certainly a difference in perspective.
1. “To date, cybersecurity issues have had no impact on those metrics in North America.” I’ve never been the victim of a home invasion, but that’s not really an argument that should dictate my behavior. I also don’t like the thought that we can just keep qualifying this forever (e.g., “…no impact on metrics in states outside of Illinois…”). I don’t like the implication that the lack of it happening so far means there’s no risk. I don’t like this as an argument to stand on in this discussion, at least not heavily. The problem is that a reasonable person *can* see risk here, and isn’t privy to transparency over the security controls. I don’t know how you fix that, however. Most of us reasonable people don’t have the time to examine what transparency there is. We just know network-enabled dashboards + internet access = eventual problem.
2. The first commenter added: “…some of the statements could be interpreted as downplaying the central issue: automation systems our society depends [on] are fragile.” The author does actually somewhat address the need for security controls and reviews and touts the grandness of them, but it just needs to be added that further automation means further issues, and they can be widespread. One could probably point to almost any internal IT process gone haywire as an example. Or, more visibly, the Wall Street hit that one day when automated trading systems went “weird.” (It’s Thanksgiving Eve, it’s late, I’m not going to look it up.) I just want there to be consideration of automation and the risks it brings. Not to say it’s bad! The whole purpose (the whole fundamental underlying PURPOSE) of IT is automation.
3. “In none of the other industries do we see the same level of hand-wringing over standards and interoperability as we are seeing in the energy industry”. Well, yes and no. First, yes we security pundits do hand-wring. Over everything. But the energy industry can be an easy target because it’s a public work. It’s not like I’m going to get sued if I call them insecure off the cuff. But enough on this point…
4. (Underlined emphasis is mine.) In this paper, NERC and the U.S. Department of Energy identify three event types that they classified as high risk, but low frequency. These three events are pandemic, geomagnetic disturbance and electromagnetic pulses, and coordinated attack. Coordinated attack in this case was defined as “a concerted, well-planned cyber, physical, or blended attack conducted by an active adversary against multiple points on the system.” The report goes on to say that no such attack has ever been experienced in North America. Run that probability through your risk calculator and see what comes out. This kind of event would be an act of war, and no private utility is able to, or could be expected to, defend against an attack funded by a nation-state. The cost of such defenses could easily double the cost of electricity.
First, I don’t think the only cyberattack is one that is coordinated and well-planned. We also need to discuss Johnny at home Google-hacking your web front-end, guessing a password, and being stupid, or Anonymous, or some other group in it for the Lulz. I think too many reports these days are missing that joyriding, opportunistic, even automated-malware-gone-wrong threat space. Second, let’s say that Illinois incident actually did originate in Russia. Was that really an act of war? Come on, this is still classic espionage or even vandalism (I know, it probably bumps up from vandalism into something more felonious in our legal system). Most of the quote above is a bit obtuse and hyperbolic in itself. Third, though, I do like the statement at the end that really drives home some reality: more controls would equal more cost. Hyperbole? Probably, but there’s probably a lot of truth in it.
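That “run that probability through your risk calculator” quip can be made concrete with a toy annualized-loss-expectancy sketch (the classic ALE = SLE × ARO model). All the dollar figures and rates below are invented purely for illustration, not real utility data; the point is just that a cheap, frequent joyriding threat can rival a catastrophic, nearly-never threat on paper:

```python
# Toy annualized loss expectancy (ALE) sketch: ALE = SLE * ARO.
# Every figure below is invented for illustration only.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized loss = cost of one incident * expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# A coordinated nation-state attack: enormous loss, but (per the article)
# never observed in North America, so the rate estimate is tiny.
coordinated = ale(single_loss_expectancy=5_000_000_000,
                  annual_rate_of_occurrence=0.001)

# A joyriding/opportunistic attack: far smaller loss, far more frequent.
opportunistic = ale(single_loss_expectancy=250_000,
                    annual_rate_of_occurrence=2.0)

print(f"coordinated attack ALE:   ${coordinated:,.0f}")
print(f"opportunistic attack ALE: ${opportunistic:,.0f}")
```

With these made-up numbers the rare catastrophe and the routine nuisance land within an order of magnitude of each other, which is exactly why ignoring the opportunistic threat space skews the whole calculation.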
I think that hits the major points. I will say again, I really, really like this article and it really must be a mandatory read for anyone hand-wringing and punditting over SCADA. I may argue a few discussion points, but the whole thing is still a valid stance. While we need people calling for more security and we need devil’s advocates, we also need doses of reality to mix in. This is the beauty (and balance and art) of “security” in any situation no matter the scope or budget.
Plagiarism outing in the security industry is on the rise this year, and I’m somewhat happy to see it having a positive effect. I know in the past I’ve run across blog/news feeds that really read fishy and in some cases I’ve looked up article pieces and found them lifted word-for-word from unattributed sources.
I’ll admit, I’ve long had SecTechno in my RSS list, and about every other time that I catch up on that feed, I get the vibe that the content is written by someone else. I’ll even admit to regularly looking up some of that content in Google to see if I get hits. I never did get much, but I see in the last month he’s had his ways righted. Good to see progress and some cleaning up going on in terms of content and plagiarism/attribution. I know of a few other blogs out there that I actually could prove were lifting content without proper citing, but for those I’ve just passively dropped them from my list of links as well as my news feeds. Really, it just deeply annoys me when a blog steals content, but it’s hard to tell whether they’re doing so for profit, as an automated aggregator, or just for that person’s personal consumption on a feed that happens to be public as well. I tend to just be passive and not be a supporter of such sites. I can probably guarantee that if I felt this way, passively, about SecTechno, there certainly were others who felt the same and quietly did not respect the author. In light of this outing, that respect can be reforged. 🙂
Hopefully I never fall into that mistake. Plagiarism has been a word I’ve known since middle school, and it has always been a huge evil. It’s often misunderstood if you didn’t get it drilled properly into your head in school, but essentially: if you pass off an idea (note: this is the tricky part) or creation as your own when you know it isn’t, that’s the spirit of the issue. Hopefully I always remember to link back to any sources I reference (I think I always have).
I make no secret that I am an introvert (INFP in fact), and I absolutely love this article posted via Twitter: “Caring for Your Introvert”. I tend not to consider myself antisocial; I do consider myself “asocial,” in that I don’t *need* social activity, and given the choice, I’ll often be happiest away from something social. The larger the group, the less I’ll enjoy myself. As the author puts it, social settings are simply tiring for an introvert.
Extroverts are energized by people, and wilt or fade when alone. They often seem bored by themselves, in both senses of the expression. Leave an extrovert alone for two minutes and he will reach for his cell phone. In contrast, after an hour or two of being socially “on,” we introverts need to turn off and recharge… This isn’t antisocial. It isn’t a sign of depression. It does not call for medication. For introverts, to be alone with our thoughts is as restorative as sleeping, as nourishing as eating.
Some articles make my head hurt. Like this one, “The one ring to rule them all,” from the Sydney Morning Herald. Take this opener:
DAVID Vincenzetti isn’t your typical arms dealer. He’s never sold a machinegun, a grenade or a surface-to-air missile. But, make no mistake, he has access to a weapon so powerful it could bring a country to its knees. It’s called RCS – Remote Control System – and it’s a piece of computer software.
Developed by Vincenzetti and a team of former computer hackers, RCS is able to ”invade” a digital device undetected, bypass the most sophisticated electronic defences so far devised and, if the user so desired, disrupt the running of anything from a railway signalling system to a nuclear power station.
Sounds like that box in Sneakers that will crack every encryption ever, right?
Well, when you look at their typo-riddled (even language differences notwithstanding) marketing video and the introductory literature, the real story is clearer. This is “simply” another glorified keylogger “agent” that you *have to get installed on a system*, and it does the rest. No doubt limited to specific Windows flavors.
But the rub is how you get the agent installed in the first place. I see this as the real challenge, and it’s not in the vein of some super weapon that you digitally point at anything and everything (SCADA included, as implied) for a successful strike, the way the article and their own marketing make it sound. This is the sort of marketing crap that makes people in the military think that a hacking attack can be ordered, carried out, and successful on command.
Sounds like some reporter got a face full of the kool-aid and mixed it in with an overdose of hyperbole.
Then again, maybe it’s stuff like this that is new to militaries and governments and makes their eyes gleam at the thought of new toys. When in fact it’s not really all that new or novel at all. Makes you wonder if private industry is far ahead in digital security, only less inclined to compromise economic priorities to practice the knowledge….
Google+ came around and I was interested in trying it out. Then Google+ adopted a real-names-only stance, and I specifically passed on using it. Now Google+ has walked back that policy, but the time has passed where I even care.
The one exception would be infosec-related groups that may pull me in, like this book club idea. Sounds intriguing!
The book club will certainly work Twitter into the mix as well, but it does bring up the exhaustion one can feel with today’s splintered social landscape. Should I adopt Google+ to chase one or two groups (or groups of people)? Should I chase into LinkedIn, Facebook? Should I chase into each new splinter social media platform? I can’t possibly do that and feel satisfied in my life, ya know? It’s like never enjoying the car you have because you need to keep getting a new one; which ultimately is a hamster wheel race you’ll never win.
Granted, staying in one or two established social platforms isn’t the best either. It is inevitable that someday something newer and cooler will come along, just like Facebook to Friendster or Myspace. And languishing on an old system is much like playing Quake 1 for 20 years and clinging to the past.
I dunno. Anyway, an infosec-related book club on Google+? Definitely a good enough reason to put a foot into the service.
I got momentarily excited by an Information Week article on next year’s security spend trends. I was hoping to find out what new technologies might be exciting and interesting, but really it just sticks to vendor ideas and the same old stuff. Application firewalls? I guess.
This article, towards the end, prompted my out-loud thought on Twitter: “Just how relevant will DLP be in 10-15 years?” I understand the desire to know about and prevent data loss, but while DLP helps monitor the big and easy holes, I fear it does nothing to assuage the tide of external attacks and actual malicious activity. It helps stop casually negligent insiders and mistakes. But not every mistake. For instance, what about all that data that gets “accidentally” posted to web sites on a regular basis? We’re really not talking DLP so much as more rigid manifestations of policy adherence, control, and reporting.
I guess this goes back to having a blended defensive posture and DLP being one part of it, but I don’t know how long endpoint DLP can survive when endpoints naturally want more freedom (and battery life and speed and the things that DLP agents/tools take up when on endpoints) and business wants assurances that DLP is poorly marketed to handle.
New to read: H(ackers)2O: Attack on City Water Station Destroys Pump. That article is what it is, and I can’t add too much to it other than the note that vendors need to start looking out. To get into target systems, attackers are now going after the systems and access that vendors hold, rather than just directly at a target. RSA is a prime example from early this year. In other words, you might become a target depending on your clients.
Incidentally, taking a very similar approach to classic espionage. (I’ve found this to be a common parallel with today’s nation-state-oriented attackers, i.e. The APT. We shouldn’t be surprised by any of this.)
It amuses me when a business tentatively moves into the “social media/networking” arena and has really no idea what they’re doing. I’m no expert (unless you count ~20 years socializing online), so I’ll try to keep this to short bullet points on individual ideas. This has a bit of a Twitter slant to it…
I should add my inspiration. I was on a security vendor site and popped into their forums. Which hadn’t been used too much lately. I then ended up on some unrelated vendor’s site as well, and I popped into their forums. There’s something about support forums where I can do things like self-serve, browse other people’s questions and realize, “Hey, that works for me, too,” and post my own questions. Honestly, I love forums, and every time I see one populated with a good social presence from the business, I feel a happy pang of nostalgia (which is sad itself, since I still feel forums are far more effective than any social media whizzbang in the last 10 years). I then checked out their Twitter link on their front page, and was extremely let down by how Not Right it felt. One felt tended to with loving customer service hands, and the other with sanitized Not-With-It marketing gloves.
[Aside: I do have to mention that while I love forums, I have to say that they’re also N-O-T-O-R-I-O-U-S as security black holes. I think Valve/Steam can give the most recent lesson on keeping your public systems secure, segregated, and updated….]
1. If you’re on Twitter and point me to your Twitter account in some fashion, I will judge you if you have only 75 followers. There are many, many accounts out there that look for popularity, and if you follow them, they will automatically follow you back. This artificially pumps up your own numbers. While I will check for and judge you on that, it definitely helps you blend in and look busy at a casual glance.
1.5. Being a security guy, I will also judge you (and I will check!) if most of your followers appear to be your own employees. While honorable, it just makes me think you emailed your own employees to sign up and bloat your numbers. I’ll check how active your followers are (I bet they’re not, at all). And I will also make note of how I would social engineer my way into your company with all this spilled knowledge and access to people. Be careful! (I can probably also assume the first few followers are the marketing team and/or the people responsible. With context into what you do, my social engineering attempts can be very targeted.)
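The sniff test in points 1 and 1.5 can be sketched as a toy heuristic. This assumes you’ve already pulled follower records into plain dicts somehow; the field names (“employer”, “tweet_count”) and thresholds are all hypothetical, not any real API’s schema:

```python
# Toy heuristic for the "are these followers real?" sniff test.
# Field names and thresholds are hypothetical, not a real API schema.

def follower_smell_test(account, followers, min_followers=100,
                        max_employee_ratio=0.5, min_active_ratio=0.25):
    """Return a list of red flags for a business account's follower base."""
    flags = []

    if len(followers) < min_followers:
        flags.append("very small follower count")

    # Point 1.5: are the followers mostly the company's own employees?
    employees = [f for f in followers if f.get("employer") == account]
    if followers and len(employees) / len(followers) > max_employee_ratio:
        flags.append("followers are mostly the company's own employees")

    # Point 1.5: are the followers actually active accounts?
    active = [f for f in followers if f.get("tweet_count", 0) > 10]
    if followers and len(active) / len(followers) < min_active_ratio:
        flags.append("followers are mostly inactive accounts")

    return flags

# A fake follower base that should trip every check.
followers = [{"employer": "AcmeSec", "tweet_count": 0} for _ in range(60)]
print(follower_smell_test("AcmeSec", followers))
```

Nothing here is clever; the point is that every one of these judgments is mechanical enough that an attacker sizing you up for social engineering can script it too.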
2. Don’t let marketing own and stifle the social media presence. Too often a business thinks social participation is only a marketing opportunity. That’s wrong. Adopting social media is a way to open communication between you and others on a personal level. It allows people who like (or hate) you to talk with themselves as well! And it allows you to provide customer service and support.
Think of it this way. Social interaction with outside people has been happening far longer than we’ve talked about “social media.” It has happened in things like web forums where users can sign up, ask questions, make comments, and otherwise form a little social community. This is often organic between users and support departments; not marketing.
Marketing too often thinks about marketing opportunities and forgets about the customer support opportunities. And, really, when eliminating the planned marketing stunts out of the equation, I would guess that most Feel Good Marketing Stories don’t come from marketing’s presence with social media, but rather good (surprising!) support offered through such. That’s not the marketing team, that’s the customer support team (which can arguably report to Marketing…). That’s also not part of some brand campaign, but rather day-to-day attentiveness and quality.
3. For the love of all that is pure and good in this world, don’t let your Twitter feed turn into a press release pipeline or megaphone for links to the corporate blog. At least feel like a real person is behind the username. That’s really one of the biggest failures: when the social media presence feels stuffy, artificial, and useless to the sorts of people you’re wanting to follow/like/engage you. In short: be interesting to the people to whom you want to be interesting. Don’t be safe, and square, and otherwise a stick in the mud. Flavor, mistakes, and opinions make us interesting people.
4. Embrace the anonymity of the Internet. Don’t force me to register to submit comments on your blog. Don’t force me to have a Facebook account. Just because I don’t want to share my identity with what is almost certainly going to be your marketing machine doesn’t make any of my opinions or experiences or needs any less relevant. Never, ever disparage one of the strongest pillars of Internet usage: anonymity. If you do, you immediately sound older than 50 and you shun a significant number of users. Unless you’re Facebook or Google+, where gathering this data is part of your direct revenue-generating business model, don’t do it. If you want to avoid embarrassment and Internet trolls, use moderation or people who can handle those types of discussions/situations (the people who are social media experts but would never call themselves such because they were around before the term).
I’ve long been a proponent of sharing information about breaches and insecurity with our peers, so I liked a recent post by Adam over at New School… “Breach disclosure and Moxie’s Convergence.” There are two main takeaways for me.
First, if we don’t disseminate information, we can’t make breakthroughs like the one described for Comodo and Moxie. And no one else will learn from the mistakes of others, or the post-mistake triumphs of others.
Second, while it is important to “share” information, especially amongst our peers (in a possibly controlled environment), it is a step further to actually be able to “publish” that information instead. For instance, it’s one thing to attend InfraGard with an actual or merely understood NDA in place, but another entirely to let the world know the information and possibly be able to act upon it.
While I will still always say we need to “share” information more, I’ll definitely have to keep in mind that the spectrum of sharing does have different meanings to others. The spectrum would look something like: private–>shared with a few–>shared with quite a few–>published. As long as we can share, it’s good, but it gets better as you move down that spectrum.
You get what you pay for.
Are you paying for a pat on the back and affirmation that you’re “doing ok” with security?
Or are you paying for a critical look at your security with an aim to improve it?
Inspired in part by a Security Balance post.