erratasec notes on thunderbolt and ssd memory wiping

Robert Graham has two excellent posts up. First, be aware of the ports on your laptops/devices, specifically the new Thunderbolt technology from Apple/Intel. Yup, this brings back memories of the FireWire issues! Second, be aware of SSD drives and how you might not be properly wiping such flash memory unless you’re careful.

Essentially, take care of your device ports and shore up your SSD drive destruction policies and practices.

security needs to change? wait…change what?

I’m still ornery today. I’m not sure what it is; I think it’s just this lingering tail end of a cold I’ve been stuck with for the last 2-3 weeks…
I’ve been sitting on this post from Dave Shackleford for a few days, letting it digest and ferment…errr I mean sink in and blossom. Dave talks about a few topics and I wanted to pull it apart like unwinding some Twizzlers. He talks about post-RSA thoughts, business alignment, change, worshipping exploit-finders, and the echo chamber.

As with the post I discussed yesterday, I just want to preface that I agree with Dave. This isn’t meant to be argumentative or critical; rather, I’m building on and fleshing out his points…

Post-RSA thoughts? I think it is fine to want security companies that do have passion for their work, but yes, point taken that there are still plenty of companies that are only chasing profits. (As a corollary, how many security nuts want to go into sales? And thus, how many sales and marketing people are security nuts? Yeah, that’s the gap. It won’t change.)

Aligning with business. I don’t think when someone says that security needs to integrate with business that they mean you need to figure out how *other* businesses work and accept that they’re in it to make money. Maybe that’s a given to me? Who knows, maybe there are still people who come into security all idealistic and think every vendor is out there to help them with their security and offer only solid, value-driven solutions. Well…it will only take them about a year to realize they’ve sometimes been sold lemons and sometimes been sold tools their budgets can’t support.

Change. I agree. “Change what?” I’ll say that sometimes this is the right approach. If I’m not happy, the solution is two-pronged: deciding to change, and figuring out what to change to make me happy. In the case of security, I don’t think we know the answer to either prong. We don’t know what to change and we don’t know what changes will improve anything. So why do we say change? Because we’re not perfect? Because we’re still behind the curve? I’d argue that’s exactly where we will always be by nature of the beast! Sometimes you’ll be unhappy because you’ve set unrealistic, maybe even impossible goals for yourself. In that case, you need to redefine your happy state. Or redefine what “security” means to you.

Worshipping exploit finders (aka the adversary motivation). This is complicated, and I both agree with Dave and think there is simply more to it. First, I think our focus on exploit-writers and breaking into things is deep-rooted, probably something to do with competition. This may be a people thing or even a national/socio-economic thing (capitalism==competition). Second, we’d all become better defenders if we had more skill/knowledge as/of attackers. How would a person best secure their OS/apps? By knowing how to break them. Maybe not in a way you can do it while sitting in a club getting a blow, but at least knowing that it’s possible.

In the end, I do agree, however. For as much fun as it is to break things, we continue to need to focus on the fun of securing things and thwarting attackers. We need rockstar defenders as well as attackers. (At least there are many attackers breaking things for the greater good as white hats; we do still need that segment.)

Echo chamber (aka evangelize). This is a tough call. I agree we need to get out of our comfort zone, but this is a bigger bill than one would expect. On one hand you can talk to technical people, but if you’re going to talk to them about security, you need to talk on their level and give them actionable information. Not just point at the OWASP top 10 and hit the bullet points, but give examples of insecure coding and ways to actually fix it (see the sketch below). Otherwise you’re just a burden; another requirement-giver causing them more work and telling them their babies are ugly. You have to actually teach, which is still hard for many security people to do adequately.

On the other hand, you have a crowd of non-technical people who need to know why they should even bother; and they often need a heavy dose of FUD to get the point. But even here, expectations need to be tempered or we’ll always be an unfulfilled bunch. My age-old example of home security hits home here (huk huk!): it’s easy to scare people, people know they need to do it, yet so many homes are just waiting for theft/invasion. You’ll also need to deftly and understandably field dumb questions and deflect misguided assumptions, keeping in mind that not everyone is as paranoid as a security geek and not everyone puts the same value on their personal information as a security geek does.
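To be concrete about what I mean by an actual example rather than a bullet point, here’s a minimal, hypothetical sketch (Python, names made up by me) of echoing user input raw versus encoding it first:

```python
import html

def render_greeting_insecure(username: str) -> str:
    # Insecure: user input dropped straight into the HTML response.
    # A username like '<script>...</script>' executes in the victim's browser.
    return "<p>Hello, " + username + "!</p>"

def render_greeting_secure(username: str) -> str:
    # Safer: HTML-encode the input so any markup renders as inert text.
    return "<p>Hello, " + html.escape(username) + "!</p>"

if __name__ == "__main__":
    payload = '<script>alert("xss")</script>'
    print(render_greeting_insecure(payload))  # ships the script tag as-is
    print(render_greeting_secure(payload))    # &lt;script&gt;... harmless text
```

Five minutes walking a developer through something like this beats an hour of slideware bullet points.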

Biggest point: We security geeks rant and rave and we *need* to. We *need* to talk to each other to share ideas, but we also need to share our pains and stresses and cathartically release them together. And we *need* to keep talking to others outside those groups. This is where consultants really, really need to bring their game. Charlatans in it for the paycheck need not apply.

Last point: We’re often at the short end of the stick, just like IT operations. We’re at the mercy of attackers, users, software, business, and vendors giving us crappy security products filled with half-false promises. Getting to the forefront of this probably means embracing risky, edgy concepts like “there is no perimeter” and doing things dramatically differently… Maybe. That’s just me high on tea this morning…

0day anger? naked security? prevention jab?

Go read this post from John Strand over at pauldotcom talking about the latest Microsoft 0day:

You have to ask yourself, if someone wanted to target you, how successful could they be? What’s stopping them from getting your users to click on a link or open an attachment? What stops your users from accessing SMB on your servers? How do your servers defend against a 0day attack?

I have several issues with this particular post, but maybe I’m just killing time at the end of the day being ornery…warning: this is a bit ranty/rambling/disorganized, even for my tastes.

1. Just to start, I agree with the post and position in general. There’s really nothing new or wrong here. It’s a great starting point to a discussion/thought-exercise on security paradigms/posture. Security geeks should *always* think this way.

2. It’s just that: a starting point. I liked this post at first, hoping it would dig into some ideas about this issue. But there’s really nothing there other than the point that prevention eventually fails, therefore detection/response are important. (I’ll get back to this in a few points…) Well, if IDS is evaded, what other detections are we talking about?

3. Also, I take issue with the possible tone of the last line: “No friends, it has nothing to do with prevention anymore. It is now a question of containment and detection.” Is that in the context of the hypothetical? I hope so, because this month’s 0day shouldn’t be the catalyst for such a position (1996’s 0day should have been), nor should we just throw our hands up about prevention just because 0days exist.

4. The problem, in business anyway, with the “prevention eventually fails so get with the detection and containment” idea is that it’s only a vague concept. It’s hard to get budget dollars for something that doesn’t actually empirically exist. But, I’ll defer to all the risk experts out there who do just that… Yes, it’s important; it’s just more difficult to explain that to a layperson; it’s difficult for them to grasp the concept that an attack *will* be successful at some point. Even in security ranks this is spoken but not always truly acted upon.

5. Every time a new 0day comes out, there are sets of people who start wailing about how you can’t protect against unknown attacks leveraging existing holes in software. Well…that’s not a new proposition; it has always been, and should always be, part of a security mindset. Every single piece of software, hardware, and protocol we run right now probably has a weakness we don’t know about. Hell, we should *assume* as much…at least insofar as we can do anything about it.

6. Before anyone gets too uptight about a currently known 0day issue, we really have to dive into the issue and what is really at risk. In this case, yes, someone can *possibly* run remote code on a domain controller. How do you do that? Fine, you can trick a user into hitting a website and get their machine owned via some other exploit, which may then act as either a call-back zombie or itself launch this new 0day attack against whatever domain it belongs to. Let’s assume said attack can run remote code and then own the domain controller. (Wormable? Only on a small scale, i.e. within trusted domains. Unless this can reliably attack the services on regular Windows machines or blend attacks [a la Stuxnet]…then we’ll have to scrub all of this!)

7. Well, what next? Someone might talk about VLANs and firewalls and segmentation, but those are largely out the window when you talk about owning a box that, in a Windows environment, needs access everywhere. You can make sure your domain controllers can’t talk to any untrusted networks at all, for starters. Why let a DC call home to an attacker? I would hope proper log management and file integrity monitoring (for those few that actually do them passably well!) would help raise the alarm quickly; see the sketch after this list.

8. Once an issue is known, we do start seeing signature definitions get pushed out by various AV/AM/IDS/IPS vendors. This is a start, though I’ll admit some would be stymied by even small changes in the payloads, especially for POC exploits. Yes, you can evade defenses, but how often is it *really* done in a way that isn’t a gimme? (A “gimme” is using SSL/TLS over 443 to deliver it…I mean, come on, I don’t consider that to be an *interesting* evasion of IDS. To me, evasion is walking past the security guard while he’s looking right at you, not walking in a side door he can’t even see.)

9. To build on the previous point, it is useful as a thought exercise to think about life without AV and IDS and patches, and to justify that by saying AV is weak, IDS is evadable, and patches are often not done. But that shouldn’t mean to anyone that there is no value in any of the above three, or no need to pursue them. We need to make sure laypeople know that we security geeks are realistic, but paranoid. Trust, but verify.

10. So now we’re back to endpoint security and various ways to protect the endpoint including reduced rights, web proxies/filters, egress monitoring, education, etc. Or maybe even just data-centric security (which cloud and virtualization/consumerization of enterprise IT will tell you is now nigh impossible). In chess, how do you protect your king? You protect him with pawns, and you protect those pawns with other pieces. Now we’re starting to think in layers…

11. In the end, while I do have some small issues with the post (would have liked twice the post with some follow-up answers/solutions/theories), I do absolutely agree with the spirit of it!
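And since point 7 leaned on file integrity monitoring, here’s a toy sketch of the concept, assuming hypothetical paths and no particular product: hash the files you care about, keep a known-good baseline, and flag drift.

```python
import hashlib
import json
import os
import sys

WATCHED_DIR = "/etc"             # hypothetical: directory to monitor
BASELINE_FILE = "baseline.json"  # where the known-good hashes live

def hash_tree(root: str) -> dict:
    """SHA-256 every readable file under root, keyed by path."""
    hashes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # unreadable file; skip rather than crash
    return hashes

if __name__ == "__main__":
    current = hash_tree(WATCHED_DIR)
    if not os.path.exists(BASELINE_FILE):
        # First run: record the known-good state.
        with open(BASELINE_FILE, "w") as f:
            json.dump(current, f)
        print("Baseline recorded.")
        sys.exit(0)
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    # Report anything added, removed, or changed since the baseline.
    for path in sorted(set(baseline) | set(current)):
        if path not in baseline:
            print("ADDED:   " + path)
        elif path not in current:
            print("REMOVED: " + path)
        elif baseline[path] != current[path]:
            print("CHANGED: " + path)
```

A real deployment needs protected baselines, scheduling, and alerting that somebody actually reads, which is exactly the “done passably well” part that’s rare.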

don’t only blame the techs for insecurity

Whoa, is there a devil’s advocate flying around here today?

Is it easier to accept user input and then consume it (either plug it into a SQL query or echo it back to the screen somehow…), or to accept user input, validate it securely, and then consume it? The difference in effort/time/knowledge is a reason why we’re still seeing massively insecure systems.
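To make that effort/knowledge gap concrete, here’s a minimal sketch (table and values are made up for illustration) of the string-built query versus the parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "x' OR '1'='1"  # classic injection payload

# The "easy" path: user input concatenated straight into the query.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("string-built query returned:", leaked)  # every row leaks

# Barely more work: let the driver handle quoting via a placeholder.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)  # no rows; payload is inert
```

The secure version is barely more typing, but somebody has to know it matters and take the time, and that’s the cost businesses skimp on.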

That effort/time/knowledge cost is something far too many businesses don’t really value. It increases budgets and pushes deadlines. Why spend extra resources and then get your product out…versus just getting your product out? You’re going to have egg on your face if you have a security breach, but you’re going to have egg on your face if you spent cost*2 for something that ended up not working out (as a product, process, etc).

This is the conundrum… And you can see it any time a technical person is appraised based not on the quality of their work, but on their delivery times and customer satisfaction. Both of which are helped by cutting corners, bending rules, or taking shortcuts.

This is why all of this is a balancing act. We just need to keep adding security where we can, adding input when asked, and pooping out as much quality (real value) as we can without sacrificing ourselves on the business profit altars.

be aware of today’s hacker ethic

More news, this time from Forbes, on the storm roiling around HBGary Federal.

I don’t outright condone hacking incidents like this, but as has been said elsewhere, it is hard to feel sorry for someone who has had their closets turned inside out for skeletons…and indeed many (fresh) skeletons are found. It is also hard to exonerate the attackers because any target they attack likely has similar skeletons…or if they don’t, then the damage done in finding that out does everyone a disservice.

In our world today, the “hacker ethic” that information (and secrets) tend towards being public needs to be remembered by business leaders. Yes, there are secrets to be kept (an arguable point I won’t argue), but keep in mind that you really, really, really have to be conscious of keeping those secrets secret in today’s world. (This can dive into why pen testing isn’t and won’t be “dying” any time soon…)

No, I’m not ignorant enough to think that these sorts of business dealings, coercion, and borderline shady approaches don’t happen on a regular basis with most individuals and corporations. But as much as possible I believe people should act with some degree of integrity and respect, including corporations.

I have this weird internal compass that has sympathy towards things like the “hacker ethic” as well as aspects of Randian objectivism… It all makes sense in *my* head anyway!

iphone keychain/password attack preaches device awareness

Researchers have figured out a way to recover some passwords from iPhone/iPad devices in 6 minutes (video and pdf links are in the article). Obviously this is yet another excuse to preach about not losing your devices and reporting lost devices so accounts can be disabled and/or passwords changed.

But there’s more…think about this. Your VP of Whatever is on a business trip to China. He unplugs for a bit and heads to the exercise room of the hotel, leaving his iPhone in his room. Someone enters his room and has unfettered physical access to his device for x minutes. And you won’t even know it. And don’t for a minute think this doesn’t happen. Maybe the VP will just think his iPhone is broken and exchange it…

In other words, always know where your devices are, even when they are switched off or locked. This should be obvious, but I don’t think non-paranoid people have often been told this.

passwords shared between rootkit.com and gawker

The Register posted a story comparing passwords disclosed from rootkit.com and Gawker, which suggests a problem with password reuse.

This is a classic journo case of an editor-sensationalized title for an article that doesn’t really get reasonable until the last two paragraphs where it kinda puts the brakes on calling password reuse “endemic.”

Gawker is a celebrity gossip site. Rootkit.com had a forum. As a security-conscious person, would even *I* use the same password for both sites? Actually, I likely would. Gawker would be exceedingly low value to me, if I had an account there, and a PHP-based forum would be exceedingly risky to me. I *might* actually use a crap password for a forum like that, but I’d call that a flip of the coin depending on my mood the day I made those accounts.

Does this mean we should start running around screaming about endemic reuse of passwords? No. We should encourage people not to reuse passwords anyway, but this research really doesn’t say all that much.

people cause insecurity, and also influence risk

I sometimes shy away from the obvious big news that everyone is already talking about, but finally I read a decent enough article about the recent HBGary Federal drama, over at Ars Technica, of all places. That and the Krebs piece (whom I’ll just unofficially credit as breaking the news, when I saw his comments on Twitter as the Super Bowl was starting…) are all you really need.

Anonymous got into HBGary Federal’s e-mail server, for which Barr was the admin, extracting over 40,000 e-mails and putting them up on The Pirate Bay, all after watching his communications for 30 hours, undetected. In an after-action IRC chat, Anonymous members bragged about how they had gone even further, deleting 1TB of HBGary backup data.

I’ll be the first to say that *everyone* is weak somewhere, even security firms, and it is difficult to always find attacks (through automated or manual means). Nonetheless, you need to be better at your damn security. Yes, a dedicated group of attackers can give anyone hell over x period of time, but you shouldn’t fall within days, fail to detect it for 30 hours, and so on. And you sure as hell shouldn’t be so arrogant as to expose such ignorance. The entire organization should have known that this guy was about to prick a group that would, in and of itself, be a major risk agent/threat, and have acted appropriately.

On the other hand, one of my favorite quotes, “A smooth sea never made a skillful mariner,” can be equated to an IT mantra: “we learn the most when we’re troubleshooting critical issues.” While one or a few of the major players, and maybe even this branch of the company, may end up flaming out because of this, hopefully the other bit players and techs and businesspersons will learn a valuable lesson and take some extra experience away from it.

As far as the major players go, it’s hard to feel sorry for either Aaron Barr or another recent “victim,” Mr. Evans, when they’re essentially un-empathizable. It’s like the dumbass in the bar who keeps daring you to hit him and keeps barking loudly and making a scene all night, then starts crying when you do hit him and break his jaw. Rather than take moral sides in these situations, I’d just like to say, “Welcome to 2011.”

oh that silly hoover dam fud example

I’m not sure if I should laugh, cry, or just facepalm in regards to recent use of the Hoover Dam as part of the US internet “kill switch” debate.

“The bill, one aide said, would give the president the power to force ‘the system that controls the floodgates to the Hoover Dam’ to cut its connection to the net if the government detected an imminent cyberattack.”

I’ll not pick on everything that is wrong here, but I will say that if we’re going to be so concerned about systems that are supposedly connected to the Internet, so much so that we will have provisions to close those connections if necessary (which presumably won’t itself break anything)…then why the hell is the connection there in the first place? My guess is people are assuming such lax security without actually verifying that there really are layers involved. The risk of insider employees (or mistakes) is still greater…

If there’s one single thing we can learn about security today compared to 30 years ago, it is that scale and speed (efficiency) have increased. Sure, it’s nice to keep the museum doors open for visitors and staff and then lock them in a crisis, but digital networks operate far faster than any one person can react, and with such efficiency that damage is done before some “switch” can even be triggered.

jeff snyder on web app security job skills

Been developing web apps for a while and want to move to web app security? There’s room for you! Check out Jeff Snyder’s recent post about Hot Security Skills: Web Application Security [warning: may come up as a job recruitment site on web filters].

I really like that he dives down into what I think is important in most roles of security: practical experience. In this case, employers want experienced coders/developers. Diving deeper, you can see they would also like candidates to have experience with security scanning tools and web app firewalls. I’d argue those are a bit harder to get one’s hands on, as some of them are a bit spendy depending on the vendor. But I bet you can get some hands-on time if you just ask the vendors and explain you’re trying to improve your skillset and might actually end up driving indirect sales with recommendations (hint hint)…

Now, if you look at everything Jeff lists, you’ll probably see why there is a shortage of web app security engineers! Those requirements are pretty damn high, even for experienced people, and they start diving into other areas that may be less familiar (database administration, WAF, advanced authentication, various server administration…). If you have all these skills, just sticking to development will be solid bucks, let alone bothering with security! I consider it rare that a developer really understands or ever tackles these other things, some of which are often in the sysadmin ballpark.

Nonetheless, don’t let such high requirements chase you or someone you know away from web app security. There are no doubt gigs out there for less experienced people, and it’s really only those first 5 years of job experience that are the hardest, whether you’re doing practical work or outright security work. If you know your security shit, you can probably bypass the “I was a Ruby developer for 15 years [huh?]” requirement.