we still do need more technical disclosure and sharing

I was scanning Chris John Riley’s post, “The more things change, the more they stay the same!,” and noticed a mention of a Jeremiah Grossman talk, “WebApp Security: The Land that Information Security Forgot,” which incidentally has some older slides available for a taste of the content.

Yeah, we’ve come a long way and haven’t really gotten very far. But I think every era in security will likely echo the same sentiments.

Nonetheless, glancing at that talk title rehashed thoughts in my head that not enough security people are technical enough. It’s one thing to throw an Infosec guy into a room of developers and have him spout generalities and vague security concepts (which is just going to turn off the developers and further drive a wedge of passive disrespect), but it’s another thing entirely for the Infosec guy to talk and operate on the level of a developer, even to the point of sample code and pointing out real-world issues. I think that’s the part that is difficult these days, and it’s not limited to web apps. I also think this is why QSAs are poorly positioned, misunderstood, and way too often abused as consultants when they’re really not.

If you know a young person who has technical interest such as building web sites, and also has a budding interest in security, please do what you can to stoke those fires early, before their coding workload and life responsibilities overshadow their other enthusiasms.

trade fair for trojans fascinates me

This stuff is fascinating: a trade fair for (lawful) trojans and (lawful) keyloggers. We hate these things.* We fight against such malware constantly. We prosecute those who break laws in such a way. Yet there is a deep need, and clearly “legitimate” money involved, in both private and public sectors.

I guess it can at least be one way a kid who finds herself on the wrong side of the black/white hat world and gains skills in malware creation/evasion can eventually grow into a career doing the same thing for “legitimate” reasons. Certainly beats the untrustworthy world of crime.

* As a thought exercise, think about how many things happen on a network at home where a parent watches/controls a child’s experience and compare that to how adults fight against such unwanted spying. Also compare against how similar things happen in a corporate environment to maintain security. I’m not saying these are bad, but it is interesting trying to draw philosophical positions to stand upon when looking at the appropriateness or global utility of various security efforts and practices. Ya know?

krebs articles make my day (busy signals and passwd resets)

Brian Krebs has two excellent articles that made my morning. (Ok, one of them is several weeks old and I just hadn’t read it yet.)

First, “Busy Signal Service Targets Cyberheist Victim,” talks about a new service in the cyber criminal underground that will call a victim over and over to tie up their phone line so that bank calls to verify large money transactions can’t get through adequately.

This illustrates the give and take security plays with attackers: you want to complete a verification call to the customer, but the attacker has found a way to block it. While a nice feature, call verification isn’t going to be foolproof. Basically, spin again.

Second, “Loopholes in Verified by Visa & [MasterCard] SecureCode.” The hole is essentially a piss-poor method to reset forgotten passwords.

I hate things like this because it illustrates how much lip-service is paid to security until concerned consumers or other entities start asking public questions or slapping proverbial wrists. This is why I so heavily value disclosure, transparency, and public assistance. It might also illustrate the lack of critical thinking in those who contract, design, and implement these solutions.

Then again, attending to forgotten password issues is a bit of an art. This weekend I saw that my usual screenname was taken over at SWTOR.com (Star Wars!). The forgot password function requires that I at least know the email address under the account, and if this was indeed me, I don’t recall what email address I used to sign up. So comes a call in to support. On release weekend. Needless to say, I’m still waiting to see how this goes. 🙂

(Side note: SWTOR.com accounts have the “option” of using 3-5 security questions. These questions are typical questions you see everywhere. Unlike Network Solutions who allows me to answer these questions all identically [but then tell me I can’t do that when on the phone with a rep, despite their system letting me], the SWTOR.com site actually forces them to be different. I don’t understand this. I don’t use these questions as truthful answers but rather as a second password. I don’t want to have to remember 3 more passwords. I don’t have solutions that I like, but I can surmise this current situation of security questions and passwords is more often done wrong than done right.)
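The “security questions as a second password” approach described above can be sketched simply: instead of truthful, researchable answers, generate a random string per question and keep it in a password manager. This is a minimal illustration (the question list and answer length are my own placeholders, not anything from SWTOR.com or Network Solutions):

```python
import secrets
import string

def random_answer(length: int = 20) -> str:
    """Generate a random string to use as a security-question 'answer'.

    Treating the answer as a second password, rather than a truthful,
    guessable fact, defeats research-based guessing -- as long as the
    generated value is stored somewhere safe, like a password manager.
    """
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct random answer per question, which also satisfies sites
# that force each answer to be different:
questions = ["mother's maiden name", "first pet", "favorite teacher"]
answers = {q: random_answer() for q in questions}
for q, a in answers.items():
    print(f"{q}: {a}")
```

The tradeoff, of course, is exactly the one noted above: you now depend on the manager that stores those answers, and a phone rep may balk when you read back gibberish.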

speak for the topics, not for yourself

Speaking of conferences and speakers, it really torques me when I see someone wants to talk at a con (or better yet is already accepted) but then laments that they’ve not yet figured out what to talk about. Chances are, I don’t want to go to your talk if that’s the approach. (There are exceptions, such as friendships, entertainment, etc… Ok, fine, there are *very* few exceptions where I’ll see someone regardless of their message, like Adam Savage, but those people are rare and most of us are not them.)

At a con or talk where I want to learn something, I really appreciate people who have a passion to get something specific out there, whether it be something new, some insight into an industry or process I don’t normally get, or whathaveyou. I’ll even sit through people who don’t have strong speaking skills if they have compelling expertise on the subject. I’ll leave only if their level of expertise is lower than mine and I’m clearly not getting any value (though others may be).

I’m not the most keen on people who are part of the speaking circuit and speak for the sake of speaking, rather than the sake of the topic. And it eats up a slot for someone who may have neat things to say.

(This isn’t about anyone in particular in recent weeks; it’s a general feeling I’ve had for truly many years.)

stepping out of the hacker rut

Rafal Los threw out a nice article this weekend, “Steps to Avoid Mental Stagnation – Or how to re-awake your inner hacker:”

What worries me is when you’ve been working in corporate IT for 10+ years in a single organization or a single organizational profile (education, finance, whatever) and you can’t seem to break free of a specific train of thought.

I have worked in my current position for 5.5 years, and I can sympathize with the broad points. In fact, I’m a bit sensitive to it this year, knowing I’m getting behind on the things I don’t have exposure to in my business, or even things that are under the purview of another team member and not myself.

One idea I’d add to Rafal’s list is to carve out some free time. This can be at work or in your personal life: time where you just tinker with some of those security things you want to do, whether that means participating in PTES, the social network of security, coding some new things, or standing up a better lab to test tools you’ve long put off. At work, I strongly believe that good admins need a significant amount of free time to poke at strange things, learn new things, try stuff out, and stay happy (I’ve seen this talked about with *any* IT discipline, and have often heard the number 30% free time thrown out).

don’t turn the echo chamber into an abuse chamber

When someone in the “echo chamber” of security says something about getting the defenders to think more offensively, and then gets a response similar to, “Rather than complaining, maybe you should give us real ideas on how to do that,” it really irritates the crap out of me. That sort of response is antagonistic and even insulting, plus it’s always going to result in a defensive or even offensive response. There are better ways to make the same point without the passive aggression. Especially when you’re not actually disagreeing with the point!

Besides, even when talking in the echo chamber, making these clear statements isn’t a *bad* thing, and it may even need to be heard by one or two audience members.

It really comes down to education, teaching, awareness, and experience if we want to make security more inherent in IT (coding, infrastructure, networking, systems…).

If you want a stable high-availability network, you need someone who can actually do it in the way you want, otherwise your admins will end up learning the mistakes and correct answers on the fly. And it might take years to build that experience. Therefore, you ask experts and get other ideas.

As a systems/network admin on a team of systems/network admins, we do this every single month where we may look at new things but not inherently know the pros and cons and gotchas of the solutions without experience or assistance.

We frustratingly bitch a lot in security, but we need to support each other during our bitch modes, not lash back and kick each other when we’re down. That’s really my point.

would you rather find your own breach or have someone out you?

Look at that, another breach discovered by someone outside the victim company, this time affecting Dutch telecom KPN.

…a hacker broke into a Gemnet [KPN subsidiary] database after exploiting poor password policies set up on its PHPMyAdmin server… The article said the hacker came forward to prevent the kind of debacle DigiNotar created, but “he has also found evidence that he is not the first person who have gained access to the systems.”

We hear a lot of these reports of third party notices of breaches. I wish we could correlate that better with how many get detected internally, though I imagine a good chunk of those are never discussed beyond the immediate team involved…

celebrating failure and innovation

I love having Twitter up next to me while I do other things like play Skyrim. I get to see things fly by like the article “Why I Hire People Who Fail,” by Jeff Stibel, passed on by @chrisclymer.

We don’t just encourage risk taking at our offices: we demand failure. If you’re not failing every now and then, you’re probably not advancing. Mistakes are the predecessors to both innovation and success, so it is important to celebrate mistakes as a central component of any culture. This kind of culture can only be created by example — it won’t work if it’s forced or contrived.

About a year ago, the company I work for made an effort to spark innovation. And while I’m sure a few good ideas percolated up to the top, the problem is all the ideas generated are placed into a review group to pick and choose ones to follow, which ultimately leads to only accepting the safe and obvious stuff. That’s really not innovative, and really does nothing to promote risk taking or enable failure, and thus learning.

Take some risks. Fail at things. Be better for it. It’s just like taking the effort to practice so that you get better for the future.

there’s a lot to be upset about with carrier iq issue

(Disclaimer: Putting this out there, but my time at work this afternoon is forcing me to do less re-reading than I’d like. Hopefully I’m not sounding like an unreasonable ass!)

Carrier IQ is a hot topic right now, which itself sort of pisses me off. In the same spirit of what pisses me off, I read the ComputerWorld article, “Carrier IQ is BYOD kiss of death — urgent action required” (via Dan Morrill). Yes, read the article because it at least doesn’t whine about data gathered by carriers, rather that this data is logged and stored on vulnerable devices.

1. If the confirmed presence of Carrier IQ on your phone prompts new (ensconced) action, you’re doing it wrong. Whether this is a business-purchased device or a personal one, it’s not entirely YOUR device. The carrier is going to and is already doing whatever it wants. While it’s nice that people are getting mad now, you shouldn’t be surprised by this state of affairs. Maybe this will spur usage of unlocked phones not supplied by carriers, or custom ROMs, but still…

2. If you’re pissed about carrier-implemented apps, are you pissed about all the crappy apps your users can install on their phones? Again, if not, you’re doing it wrong. And there will be apps with even worse transgressions (if not outright malware apps). In users’ defense, at least they don’t have a chance to know about carrier apps.

3. Are you worried about corporate espionage targeting your phones but not your carriers? You’re somewhat doing it wrong. I like that the article mentions the risk of phone-based attacks harvesting extremely juicy data that is brilliantly stored on the end device, but one should also keep in mind that these carriers and anyone else logging anything at all (the carriers absolutely will be, it’s their network) are also risks (that includes Google or Apple, the makers of your OS). Those entities are making your risk decisions for you.

4. Why are you kneejerk reacting to get rid of Carrier IQ software in the “urgent action required” section? This is the same backwards approach to security that says you only react to bad things actually happening right now, instead of doing any prevention. It’s fine to react, but please don’t be surprised or crazed with action after the revelation of something that was predictable and probably expected at some point. And just because you get rid of Carrier IQ, does that mean you also fully understand every other part of your phone’s OS, included software, carrier presence, and installed apps? Shit no.

Is there a difference between malware keyloggers vs carrier-embedded software logging vs OS-enabled logging? In my book, not really, until users are fully made aware of what is going on. Which itself is an entirely new topic, because if you’re doing something that will piss people off if it were made known, why the crap are you doing it?

I think Dan is on the money when he says this really doesn’t change anything on the BYOD front and poses the question of whether these phones really are yours or not.

Another discussion topic would be what makes these phones so different in this regard from our Microsoft-clad personal computers running on our ISP of choice. It’s interesting that I actually trust Microsoft as my OS vendor more than Google or Apple, trust my interaction with my ISP a bit more than with my phone carrier, and trust the software process a bit more (i.e. I can, at a deep technical level, watch an install and monitor/alert on behavior). Making everything convenient hides the details, which, to me, fosters less trust…

semi-quarterly skype-in-the-enterprise mention

Skype still beats on the enterprise door with regularity. Brandon Knight talks about Skype in the enterprise over at infosecisland. I’ve talked about it before and before and before and before and before.

I like Brandon’s take on the potential eavesdropability risk with Skype (which is almost certainly real, since China allows its use and they certainly never would if it were truly private):

For example, how are you communicating today in your organization? If you are making calls which route across a PSTN (Public Switched Telephone Network) then you are already putting your conversations into the hands of service providers, governments, and whoever else may have physical access to the lines.

Fair enough argument. But this only applies to people who understand that Skype isn’t a private network. I’ve had plenty of discussions where users argue that Skype *is* private. You can’t make that assumption; you’re using someone else’s app, over someone else’s lines, and through someone else’s proxy/login/servers.

This also applies only to the instances given. If I want to eavesdrop on John’s Skype conversations, I can do some network tomfoolery to reroute traffic. Doing that on a PSTN or something else is a whole different game. The name of the game in the digital world is efficiency, which blows away any comparable example in the analog world (just ask the MPAA or RIAA…).

Brandon’s article is an excellent companion to any discussion about Skype in the enterprise, and he brings up decent points about public information disclosure, desktop maintenance, network security visibility (data exfiltration), and even side-channel delivery of content such as the ads accompanying the app.

There are even other considerations, such as how you handle people’s personal accounts upon termination (and contact lists and client/customer contact habits), automatic updates, logging, etc.

illinois water pump hack not so much of a hack

Watching the Illinois water pump hacking situation has been fun. Wired pretty much summed up the end story: no hack here, just a series of fun incidents.

While it makes for a great movie plot, and gets people excited, I’ve found that most “strange” things at work involving computers end up being completely innocent, and not the effects of some nefarious digital attackers. For as paranoid and ear-to-the-security-ground as I might be, I’m still one of the last people to think an actual attack is under way when something weird happens on my networks. And 98%+ of the time I’m correct. Jumping the gun and throwing cries of, “hackers, hackers, hackers!” without anything solid to go on does no one any good.

It’s one thing to muse about the possibility of an attack or to wildly (or jokingly) suggest it, but doing so outside of very controlled groups of people leads to a misunderstanding as someone walks away from that conversation and tells someone else that it *is* a hacker. And then it gets to someone important, and now you’re spending days, weeks (or more) trying to dig out of that hole and pass the hot potato.

When in doubt, stick with non-extravagant gut feelings. As they say in law enforcement, there may be the possibility of a complex, movie-like conspiracy, but the truth is almost always rooted in the simplest answer. Not some complex plot.

I will say, kudos on finding that Russian (but not the German?) IP address accessing the remote systems. Not so impressed that those IPs could even log in (no idea on the auth mechanism). And just a sigh about not finding those IPs very soon after the fact (i.e. log review). Then again, it’s hard to fault someone for not reviewing logs when it’s a time/money sink 99% of the time, and even then the entry might be missed. Maybe they get 240 logins a day, which would suck to browse through, and I don’t know many SIEMs that are smart enough and easy enough to just tune out anything from your normal systems. Seriously, the ideas on how to monitor are easy; the tools at hand, not so much. Yikes, this is a whole discussion in and of itself.
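The “easy idea” side of that monitoring discussion can be sketched in a few lines: keep an allowlist of networks where logins are expected, and flag any login event from outside it. This is a minimal sketch with hypothetical networks and event data, not anything from the Illinois incident or a real SIEM:

```python
import ipaddress

# Hypothetical allowlist: networks where logins are expected.
KNOWN_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "203.0.113.0/24")]

def is_expected(src_ip: str) -> bool:
    """Return True if the source IP falls inside a known network."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in KNOWN_NETS)

def flag_logins(events):
    """events: iterable of (timestamp, user, src_ip) tuples.

    Yield only the events from unexpected source addresses, so even
    240 logins a day collapse to the handful worth a human's time.
    """
    for ts, user, ip in events:
        if not is_expected(ip):
            yield (ts, user, ip)

sample = [
    ("2011-11-08T09:14", "operator", "10.1.2.3"),      # in-network, ignored
    ("2011-11-08T21:40", "operator", "198.51.100.7"),  # unexpected, flagged
]
print(list(flag_logins(sample)))
```

The hard part, as noted above, isn’t the filter logic; it’s maintaining that allowlist accurately and getting a tool to do this reliably across every system that accepts logins.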

another step in the discussion of disclosures that can kill

I didn’t even have to read this article to know where Michael Starks was going with “When Disclosure Can Kill” (pdf article here).

I’m not sure Starks made any assertions that are challengeable; in the third-to-last paragraph he poses the important questions, but really isn’t taking a stance on them. Essentially, when a vulnerability can directly threaten a life or lives, then extra care should be taken during the disclosure process. There’s really nothing to argue there. The stakes of this discussion can be raised pretty easily, though.

1. Whether disclosed or not, the vulnerability is still there. If a vulnerability is found, extra care should be taken on the vendor’s part to fix the issue. The heaviest weight of action and responsibility should be on the vendor. And just because something isn’t disclosed doesn’t mean someone else won’t find and disclose it tomorrow, or that some actor isn’t already adding it to their attack arsenal. (Things like medical devices really sound like a great place to spend nation-state intelligence agency research dollars, rather than leaving it to private persons.)

2. Pray tell how exactly all the devices will be updated with any subsequent fixes to a vulnerability? I’m not sure there’s an answer here, and it certainly isn’t a problem unique to medical devices (ATMs?). And the easy answer is to put it on the Smart Gri…I mean, Internet-connected network. Which of course opens a whole new host of issues. Still, even if a vendor develops a fix, are they *ever* going to go public enough to our (infosec) satisfaction? The question of going public with any details at all should be a central discussion.

This really is a huge discussion; for instance, the general public doesn’t deal properly with security scares. What exactly would any reasonable person with a pacemaker do when told their device has a security hole that could kill them? Ditch their vendor, of course! But that’s not necessarily the correct answer. If the vendor handles it well, wouldn’t that mean they just learned a valuable lesson internally that may help prevent similar issues?

3. What if the vendor does nothing? There’s another big window here where the vendor does nothing or feigns ignorance about an issue. It really shouldn’t happen, especially considering the bad press that will result, but it is still a fact of life for researchers. Should the researcher go public enough to elicit action?

4. Can we draw parallels with product recalls that put lives at risk? You know, cars, baby strollers, laptop batteries, children’s toys… Maybe the security of various locks that are tested in the locksport community? I’ll just throw that out there for now.

5. Is it better to inform the public about an issue, or to hide the issue from potential attackers?

If asked my really generic opinion, I’d still side with the idea that information wants to be free, and it eventually will be. That doesn’t mean disclosure of issues must happen right away, but any issues found need to be dealt with and eventually divulged to the public, with proper recognition of the involved researchers. And there should always be heavy emphasis on security in development, and vendor acceptance of public security research and assistance. I know it screws with your bottom lines and it means an unplanned project for your devs to fix, but that’s life in technology.

In the end, I’m a bit cynical: if a company can suppress information, it absolutely will. It’s a self-preserving, natural defensive reaction.