hope no one is surprised anymore by endpoint attacks

As if we didn’t already have a huge war going on over the endpoint, a researcher piles onto voice encryption by tackling pre-encryption recording on the endpoint device itself. This is sort of a “duh” in my book, as the next step would be to just record the signal as it comes in off the mic (or USB), continuing to the extreme of listening in proximity to the speaker while they’re doing business at Starbucks.

This did make me wonder whether there are laws requiring that commercial digital voice communication software and equipment allow governments to tap and surveil. And if so… Basically, so many people feel they have this veil of security on when they use something like Skype…and I can pretty much guarantee that if its use is “ok” in China, it has backdoors or methods of eavesdropping. Hell, even companies like Google make money off what you do and say and search and send and browse and go…don’t think it hasn’t crossed my mind that Google will want to record everything you say, transcribe it automatically, and index it for ad use!

Hrm, I woke up on the paranoid side of bed…that or a Garbage song was playing on the radio when I woke up! (“I Think I’m Paranoid…”)

hope no one is surprised anymore by digital crime

If you didn’t read Krebs when he wrote for the Washington Post for whatever reason (I myself was a spotty reader), and you’re into security, you should give him a fresh try at his new personal blog, because it’s good. Consider it part of your A-list. A recent article made me sigh sadly: a bank sues a customer who was the victim of a cyber heist. None of the issues here are new, but collectively they illustrate the frustrations we face in securing a digital world while also dealing with real-world culture. The comments after the article are as important as the article itself.

mid-week rambling on being pragmatic with security

If someone in security isn’t yet convinced it is as much an art as it is a science, I’d expect they’ve not done security long enough (or they’ve been lucky enough to work in a high-security environment or focus solely on academic computer science).

For as much as security wants credibility and to make a difference, it dashes our efforts when someone runs into a room waving around an automated vulnerability report and demanding that every (every!) item be fixed or business will be denied…including such idiotic things as “hiding” HTTP 403 errors because they give away directory presence, or a single weak cipher being enabled, or something else so low as to be valueless to any attacker. Or at least worth less to an attacker than it costs to mitigate! It hurts worse when this report-waving person is another “security” dude. To those people: way to sour everyone’s grapes.
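To put that in concrete terms, here’s a minimal sketch of what pragmatic triage of a scan export might look like. The CSV fields, severity weights, and thresholds are all hypothetical, and a real program would obviously need real risk context:

```python
# A minimal, hypothetical sketch of triaging a raw scanner export instead of
# treating every line item as a must-fix. Field names, the CSV layout, and the
# scoring thresholds are invented for illustration.
import csv

# Crude weights: roughly how much an issue is worth to an attacker.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def worth_escalating(finding, threshold=5):
    """Return True only if the issue's value to an attacker plausibly
    outweighs what it costs us to mitigate."""
    severity = SEVERITY_WEIGHT.get(finding.get("severity", "").lower(), 0)
    fix_cost = int(finding.get("est_fix_hours", 1))
    return severity - fix_cost >= threshold

def triage(path):
    with open(path, newline="") as fh:
        findings = list(csv.DictReader(fh))
    escalate = [row for row in findings if worth_escalating(row)]
    noise = len(findings) - len(escalate)
    print(f"{len(escalate)} findings worth a conversation; {noise} filed as accepted noise")
    return escalate

if __name__ == "__main__":
    triage("scan_export.csv")  # hypothetical scanner export file
```

The point isn’t the arithmetic; it’s that *somebody* applies judgment between the scanner output and the room-waving.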

I also read a recent post on Bejtlich’s blog as well as the links and comments for the post. Some great thoughts in there.

I’m convinced of a few things…

First, there are very few (if any) correct answers that work on a global or universal or even “just really large” scale. What works for one organization may not work for another, for any countless reasons. We have lots of great ideas, collectively, in security, most of which probably work. The biggest problem is inertia and getting someone to actually devote some time and resources to the cause in the first place.

Second, the only way to combat the crap being passed around is to be an expert in security (in as many veins as possible) and to maintain credibility to educate management. This means being pragmatic and yet effective. It means being able to talk to someone and explain why issue #87 is not the Big Deal they’re running around trying to make it, just because it appeared on an automated scan. It means not issuing ultimatums over useless low-risk issues and instead tackling issues and initiatives that will actually have some value (even if you don’t fully understand how to measure and prove that).

I really think good security geeks know in their gut when something is useful to the cause or not, even if it is hard to actually justify it every time.

pci debate podcast is quote-irific

Check out the ‘Great PCI Security Debate of 2010’ podcast pieces. Part 1 is hosted at CSOOnline; Part 2 is hosted at the Network Security Podcast. Everyone is quote-irific. Everyone has great points, and I find myself agreeing with most (but not all) of what every person says, which itself indicates the challenges we have in security. It is not about finding the ultimate answer to life, the universe, and everything, but rather still a very subjective view of what you’d think is a very objective discipline (IT).
Josh Corman early on had some great quotes:

“What a strange twist of fate that we now fear the auditor more than the attacker.”

“We’ve reached a level of completely unacceptable and unsustainable cost and complexity.”

And Jack Daniel:

“There are a lot of people just trying to get past [PCI].”

“Their [network admins and systems admins] goal is for the network to work and the systems to work, and that’s what they’re judged on. That means getting PCI out the door.” ← This reminds me of the paradigm difference between security in the trenches and security in the exec rooms. It also reminds me of Rybolov’s Infosec Mgmt graphic. It might also exemplify the difference in perspective between macroscopic (global/universal) and microscopic (one network) security…

thoughts on the google/china incident of 2010

Praetorian Prefect has a video posted demonstrating the Aurora attack against IE6. It also shows how easy Metasploit is to use once you get some experience with it. While nothing new to sec geeks, I think it is mind-boggling to norms who have no idea how slickly you can own a system.

This incident centering around Google has raised tons of discussion. I really can’t add too much more to what has already been said in various corners of the net, but I can at least add my own voice to the cacophony…

First, Google is a large, public company. They, like most any company, will not come out with a declaration like this without a firm economic reason to do so. I think the best response I’ve seen was Moxie’s over on the DailyDave list.

Second, lots of people rightly diss these companies for probably using IE6 widely. This is an easy argument (just like asking “why were you insecure?” after someone is hacked…), but not one I tend to take too deeply because, quite honestly, it takes time and effort (i.e. MONEY!) to change things in an IT environment. Good point, but don’t bandy it about too hard.

Third, stop being surprised that Google has automated systems to dump your data to authorities. Don’t be naive, both about Google and about economic entities.

Fourth, Google uncovered attacks against something like 30 other large companies. Wait…does that mean none of them detected the attacks themselves? Pass the whiskey…

Fifth, defense in depth and detection helps. Having operators/analysts keeping their fingers on the pulse of networks and systems helps (or more appropriately properly augments automated tools). Signatures (and automation) do help and have their place, but nothing will be able to interpret suspicious or strange behavior like a human.

Sixth, speaking of defense in depth, we’ve all seen the vectors of initial attack. We’ve all heard rumors about just how deeply the attackers got inside their targets. But who is connecting the dots? Exactly how did owning the clients pivot over to the servers or systems? I’m not saying I don’t believe those rumors, but I am saying it sounds like we still have a non-secure interior. I know security is reactionary in nature and economically bound, but what the hell?

Seventh, attackers were originally curious and self-serving in a non-financial way. Then they realized they could make money stealing directly from accounts in a very liquid fashion, while a subset monetized stolen CPU cycles in aggregate. I think now we’re seeing more realization that there is value in the information held by corporations, on the level of corporate espionage. This is far less liquid to most people, but to nation-states or other corps… I’m not saying this is cyberwarfare! But less-liquid espionage is the next natural step…should we be surprised that Google reportedly had a team ready to attack the attackers? Shadowrun, anyone?

some scattered links on the google v china affair

If you’re sick of Google v. China, then skip this post. This is just me hoarding a few more links for reference.

Researchers identify command servers behind Google attack.

Google’s internal spy system was Chinese hacker target (which references a ComputerWorld piece):

This [internal spy system used to fulfill warrants] reveals that Google collects information about all of its users all of the time and in a format that enables it to easily hand it over to any government agency that orders a search warrant. This is an embarrassing revelation.

recent high profile hacks expose internet explorer 0day

I hadn’t mentioned the Google/China drama because pretty much everyone else has, but new details have emerged on this topic regarding an IE 0day exploit in the wild. Both Brian (“How-Many-Bloggers-Can-Say-They-Have-Sources?”) Krebs and The H Security have good posts on it, and both link to Microsoft’s new advisory on the problem.

The H Security has an interesting comment:

The advisory states that, while the hole affects versions 6, 7 and 8, the current attacks only appear to have targeted version 6 – which raises a question as to how current the affected companies’ software inventory is.

Indeed. They also mention what is becoming the attack method du jour:

The attackers apparently used the flaw to inject a trojan downloader into compromised computers. The downloader then proceeded to retrieve further modules, including a back door that gave the attackers remote access to the computer, from a server via an SSL-encrypted connection. Links to the crafted web pages were likely sent in emails to selected employees of the targeted firms.

Outbound SSL-encrypted connection. Take that, firewall egress filters! This gets back to how prevention eventually fails, and you’re down to relying on your layers of defense to detect the issue and respond appropriately (unless you aggressively whitelist, I guess). And while we often pass off 0days as exotic and not a threat, tell that to the high-profile targets that just got hit. And we all know that once high-profile targets don’t look as juicy anymore, attackers will go after their partners, vendors, providers, contractors, and smaller shops that have far less ability to prevent and detect these attacks.
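As a toy illustration of what “relying on your layers to detect” might mean here, a hedged little sketch that flags outbound TLS connections to destinations a host rarely talks to; the log format and threshold are assumptions, not anyone’s actual product:

```python
# A minimal detection sketch, not a product: flag outbound TLS connections to
# destinations this host rarely talks to. The log format (whitespace-separated
# "src dst port") and the threshold are assumptions for illustration only.
from collections import Counter

def rare_tls_destinations(conn_log_lines, min_seen=3):
    dest_counts = Counter()
    for line in conn_log_lines:
        src, dst, port = line.split()
        if port == "443":
            dest_counts[dst] += 1
    # Destinations seen only a handful of times deserve a human's eyes;
    # a signature won't flag a back door that looks like ordinary HTTPS.
    return [dst for dst, count in dest_counts.items() if count < min_seen]

# Example with fake log lines:
sample = [
    "10.0.0.5 203.0.113.7 443",
    "10.0.0.5 203.0.113.7 443",
    "10.0.0.5 203.0.113.7 443",
    "10.0.0.5 198.51.100.23 443",  # one-off destination gets flagged
]
print(rare_tls_destinations(sample))  # ['198.51.100.23']
```

Crude, sure, but it’s the kind of behavioral question an analyst asks that a signature never will.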

security metaphysics: when is a vuln a vuln

Reading articles like this one from Krebs, about a firm planning to release a slew of previously undisclosed vulnerabilities, stokes a few latent thoughts of mine which I’ve probably expressed quietly on Twitter (or even here, and I don’t remember it).

First, it is naive to think the only vulnerabilities that exist are those that have been found and publicly disclosed.* There are people who find and sit on their vulns, and I’m not just referring to black hats or gov’t espionage/cyberwarfare players who want to keep their attacks as secret as possible (or their condoned backdoors [coughskypecough]). Even white hat hackers who find a vuln and responsibly report it may be sitting on a very important finding. Maybe it gets fixed, maybe not. Hopefully it does eventually get disclosed. Who knows how many vulns a group like iDefense is sitting on!

Second, any vulnerability found and/or disclosed today has existed since it was born, either in the current version of a product or when the underlying code was first written. This includes vulns that haven’t even been found yet. Tomorrow’s Windows root is a Windows root that may have existed for 8 years. Kind of sobering, that thought.

Third, this is why checklist styles of vulnerability management are usually backwards-looking; they look for things that are known. Something like, “turn off service X when not in use,” is a little different, but auditing for patches certainly is backwards-looking. I’m not saying there is no value in audits like that, but they should not be confused with the ability to say a server is secure. It just means we are patched against known issues and have taken some steps to mitigate future risk…
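To make the backwards-looking point concrete, here’s a toy sketch (with made-up patch identifiers) of what a patch audit can and cannot tell you:

```python
# A toy illustration of why patch auditing is backwards-looking: it can only
# compare what is installed against issues we already know about. The patch
# identifiers are invented for the example.
KNOWN_VULN_FIXES = {"KB-001", "KB-002", "KB-003"}  # fixes for *disclosed* issues

def audit(installed_patches):
    missing = KNOWN_VULN_FIXES - set(installed_patches)
    if missing:
        return f"Exposed to known issues; missing {sorted(missing)}"
    # A clean result only means "patched against what has been disclosed";
    # it says nothing about the vulns nobody has found (or published) yet.
    return "Patched against known issues -- which is not the same as 'secure'"

print(audit(["KB-001", "KB-002"]))             # flags the missing KB-003
print(audit(["KB-001", "KB-002", "KB-003"]))   # clean, but only against the known set
```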

I’d chalk the first two up in a list of “security laws” that help define an approach to digital security, right up next to other “laws” like, “You will be breached.” A fundamental baseline of belief, mind you…

* Tangential discussion can break out on this topic by talking about Apple fanboys, or even the fact that Apple positions its Mac product line (OS and devices) as premium products (i.e. they don’t have to price-match, among other characteristics). Is the Mac target demographic the type of demographic that wants to patch every month? Or even admit their product has a flaw?

reinforce the damn cockpit doors

SecurityMonkey has a post regarding new TSA guidelines, along with a video/link demonstrating how a small, probably pretty darn undetectable explosive may be created and hidden.

He also hits on one of the major things I’ve felt since 9/11: reinforce the damn cockpit doors.* What makes plane bombings so different in scope from bus, subway, or even boat bombings is the ability to take control of the vehicle to do even more damage. Therefore, protect what you can. Safety on a public plane will never be assured, although you can help minimize some obvious things like guns or stupid terrorists.

* This also includes policy and thought on what can be passed to and from the cockpit, especially on long flights or in emergencies, either mechanical or medical in nature (pee breaks?), and so on. Basically, you also don’t want pilots to be coerced into opening the door, or to have something slipped to them (a sedative through a food tray?) that jeopardizes the operation of the plane because no one else can get in. (Then again, if you drug the pilots, the plane is probably doomed anyway…)

new home for brian krebs news

In case anyone missed it, Brian Krebs (just recently departed from the Washington Post’s Security Fix blog) has opened up his own blog that is well worth the bookmark. Krebs’ blog on the Post site has long been one of the few truly useful “mainstream” outlets for security news that I keep in my RSS feeds. In reading his latest posts, I’m excited for his new venue, especially since he is his own editor now: we get not only the cream of the crop as far as news stories, but also the difficult-to-explain-on-a-mainstream-site issues.

In short, we need people like Krebs who can sit quite comfortably among three parties: the technical geeks, the business people who may act as either his sources or his subjects, and the people who make up the journalism entity. By that last part, I mean he knows the ropes about what he can and cannot write, has demonstrated journalistic integrity, and has contacts and knowledge of the laws and protections he may enjoy. We don’t have all that many people like him in the security “blogosphere.”

failure makes us stronger

David Bianco (twitter) ends a year-long break by posting a great piece on “Why your CIRT should fail!” David talks about tackling the natural biases that may form when investigating incidents, specifically by having a diverse team.

I like to remember my days in hard sciences back in college. You didn’t do experiments to necessarily prove every hypothesis you made. A vast majority of your experiments were failures that you learned from. We learn the most from failures (mistakes, being wrong…), failure is inevitable, and failure is often unpredictable.