finding religion through a life-threatening moment

I’ve said it for years, and it continues to be one of my driving “laws” of security: People/organizations care far more after they’ve been violated. Newest case in point, Google*:

“Google is now particularly paranoid about [security],” Schmidt said during a question-and-answer session… After the company learned that some of its intellectual property was stolen during an attack…it began locking down its systems to a greater degree…

This is another reason I believe in penetration testing. Sure, it doesn’t quite yank one’s pants down, drive a kick to the balls, or incite that same sense of dread as a real event would, but it should strive to come as close to that as possible. It’s not just about popping boxes with an exploit, but rather demonstrating that, “I just stole your super secret plans. I just deleted your directory servers. And backups. This will cost you xyz. And I sold the backdoor to the Ukrainians, but not before I joined all your servers to a Chinese botnet and sold all your client data to your closest competitor.”

Shows like To Catch a Thief and Tiger Team (and that one social engineering/con/pickpocketing show…) did a great job in demonstrating issues and conveying a taste of the, “Oh fuck…” moments.

I understand we tend to learn through experience. From not touching an oven until we’ve been burned to not speeding until we’re pulled over to not wrapping up until you have the herps. But we all have the capability to be informed and not make the mistakes in the first place, or seek help in areas we don’t understand (yes, that costs money…).

I may, however, just be an ass about people who can’t (or don’t) think ahead…

* Google is a tough case to use, honestly. They had everything to gain by outing China, outing IE6, and raising their own, “we’re-just-being-a-good-steward,” stock. Still, they’re not unique.

again, why should an organization disclose security breaches?

DarkReading throws out, Organizations Rarely Report Breaches to Law Enforcement. This is a, “Duh,” moment, but I do like reading the reasons given in the article.

Taking this further, I think data breach disclosure is still a lot like the age-old iceberg analogy. Even with actual laws requiring it, I would bet all the data breaches we hear about are just the visible tip of the iceberg, and a whole host of other breaches (both known and undiscovered) lurk in a huge steaming pile below our field of view.

I firmly believe that many businesses (if not all of them!) have a first reaction to ask, “Is this public yet? How likely is this to be public?” And then to kneejerk on the side of saying nothing and keeping things hush-hush. Of course, until someone finds out, most likely through third-party fraud detection analysis or the finding of files obviously stolen from that organization. I would actually expect (whether I like it or not) that all companies will stay mum when not given extremely huge incentives to disclose (jail time, extreme fines, jeopardizing of business).

Hell, I would even expect this occurs not just in disclosure to the public or to law enforcement, but in internal disclosure as well! Tech finds evidence of attackers, tells manager. And somewhere along the chain up, the message gets squelched for fear of one’s job or a naive misunderstanding of the importance of some incidents.

I wonder how many cases Verizon (or other security firms) worked on in their DBIR that should be disclosed, but the host company has opted to stay quiet on. Again, I’d bet it’s a decent number. (Note that I’m not trying to criticize Verizon or security firms who are likely under NDA and certainly have given their strong advice, but rather the organizations making the ultimate decisions about security and disclosure. Props to any sec firm that still makes an effort to distribute as much info as they can [formal or informal] to help the rest of us!)

if security wasn’t hard, everyone would do it

I’ve been feeling firsthand the pain of implementing PCI in an SMB for the past 6-odd months. It’s not all that fun in some regards (implementing on-going security in an environment that doesn’t have the time for those tasks). So I try to read opinions on PCI any time I see some.

In futilely catching up on my RSS feed backlog, I scoured several nice articles from the PCIGuru: pci for dummies, what is penetration testing, and the purpose of penetration testing.

To paraphrase Tom Hanks’ character in ‘A League of Their Own’, “There’s a reason security is hard. If it wasn’t hard, everyone would do it.”

Truth. I think it gets even harder the more you avoid having qualified staff add to your security value. You want to automate everything for the checkboxes? You’ll end up spending more and getting less in return, even if you do fill in the checkboxes.

This could lead into the other two articles about pen testing. I am a proponent of pen testing as a necessary piece of a security plan for various reasons. But I also think one reason vuln assessments and pen testing get blurred is the limited engagements that many third-party pen testers get thrown into, in terms of time and scope. Give a tester 2-5 days for a network-only test and you really are forcing them to rely heavily on automated tools more akin to vulnerability assessments. Granted, you still get a lot out of that, but you get even more from having qualified internal staff always thinking from an attacker’s perspective, who can also do longer and more frequent pen-testing types of duties.

In short, it just comes back down to my continued, deeply-held belief that security begins and ends with talented staff. Just like your software products, financial audits, and sales efforts begin and end with staff appropriate to their duties.

also protecting personal data over work lines

Just a few days ago I read about and mentioned a recent New Jersey ruling about client-attorney communications and storage in temporary files on a computer.

I failed to delve into the idea that possibly, quite possibly, other controls in an organization may be affected, namely traffic captures and web filtering tools, especially if SSL termination is provided with the latter.

new jersey ruling on email privacy at work

This is the kind of story and court-ruling that makes my head spin. Via DarkReading:

In a ruling that could affect enterprises’ privacy and security practices, the New Jersey Supreme Court last week ruled that an employer can not read email messages sent via a third-party email service provider — even if the emails are accessed during work hours from a company PC.

According to news reports, the ruling upheld the sanctity of attorney-client privilege in electronic communications between a lawyer and a nursing manager at the Loving Care Agency.

After the manager quit and filed a discrimination and harassment lawsuit against the Bergen County home health care company in 2008, Loving Care retrieved the messages from the computer’s hard drive [temporary cache files] and used them in preparing its defense.

I’d suggest checking out the ruling itself [pdf].

Some of this sounds fairly obvious, right? But what really raises questions would be laptop users who take their system home or offsite (i.e. away from the shelter of corporate web filtering) and then use it to connect to personal email accounts. Do employees have a reasonable right to privacy for any artifacts that get stored on the system, especially those of a protected nature like attorney-client exchanges or perhaps doctor exchanges? If so, do employers have a duty to take extra care with those systems, any backups made, or images made after a termination? Or during technical troubleshooting and such?

Things like this end up resulting in complex policies, especially those designed to protect both business and individual interests. The same kind of policies that get ignored once they get too complicated…

be aware of windows xp mode and virtual pc weaknesses

Noticed over at Securabit some information on how Microsoft virtualization of Windows XP may re-introduce some previously-thought mitigated vulnerabilities. This includes XP Mode in Windows 7. Historically, some vulnerabilities in XP have been reduced in importance due to other protections such as DEP, SafeSEH, and ASLR. But during some virtualization scenarios, memory outside the normal protected boundaries could still be attacked. In essence, this means in certain situations, an attacker could leverage vulnerabilities and bypass those protections. I’m grossly simplifying this description, so follow the link trail from Securabit to get the details.

Is this a Big Deal? I don’t think so, but it certainly is worth the press/time and is very interesting in concept.

The bottom line for Windows 7 XP Mode is it shouldn’t be relied upon or used terribly extensively. If you need it so an app will work, use it only for that app. Also, this does not mean you can break into the host by popping the guest XP system; this is still just an XP guest issue.

on respecting evil hackers and snipers

It’s a long story how I got there, but I found this gem of a post over at the pcianswers.com blog. The post by cmark relates his training and experiences as a sniper to that of the motivation and skills of a malicious hacker. I admit, in another life I’d love to give the life of a sniper a try. Not because of the stereotypical “lone gun” persona, but rather because of the patience, intelligence, and autonomy required, which does fit my personality. Some brief quotes:

By evaluating the terrain on a map, we could determine the ‘natural lines of drift’ with some accuracy. Many people may not know this but Humans drift toward the path of least resistance. Humans traveling would naturally drift toward these lines. By understanding these drift lines, you can determine where a patrol will move, get ahead of them and intercept their movement.

Sounds familiar in business as well. In fact, I see this daily from normal users to technical workers to offsite workers to highly skilled software developers. They will drift towards least resistance.

As a sniper, I only need one small mistake. I would wait and watch until a unit made a mistake or exposed a vulnerability. To protect against me, a unit had to be nearly perfect. They had to cover all vulnerabilities and make no mistakes. (does this sound familiar?)

Point well made, from personal experience!

So what is the best defense against snipers or hackers? Quite simply, other snipers or hackers. US sniper doctrine states that the best defense against a sniper is another sniper. They possess the same skills and mentality and can counter the sniper’s actions and operations.

And there’s the discussion-starter!

rootkits in your .net framework

Over a year ago, a paper flew out across Full-Disclosure from Erez Metula talking about .NET rootkits. I promptly lost my notes on it, but after finding and reading up on it, I have to say this is pretty exciting stuff (check the whitepaper, skim the pdf if you want, but it is less detailed). Two take-aways I got from a quick skimming:

1. You can replace .NET .dll files that Microsoft trusts just by deploying to the proper “folder” (bypassing the GAC process). Sure, this requires admin rights, but what then? This isn’t a penetration or priv escalation technique so much as it is a persistence technique.

2. You can do lots of cool shit inside a .dll file, whether you’re subverting the framework or some app that uses ASP.NET on top of the framework.

This brings up a few ideas on how to protect systems that run such code.

a. File integrity monitoring on framework files or files inside the Global Assembly Cache.

b. Egress monitoring on network perimeters (not necessarily external!) to detect if something is being shipped out (such as with SendToURL or ReverseShell). To an extreme, this could also be done on the server itself so it only talks to systems it should be talking to, rather than relying on network-level blocking alone.

c. Do you know what code your developers are writing and executing on your servers? Code reviews and lifecycle integrity… I don’t know enough to speak about what privilege level .NET code is executed under, but I would be willing to bet an interested developer can do whatever he wants on a server that executes his code. This holds true for anyone that has access to the server to install something or run code or get administrative rights.

d. Um, don’t run as an administrator. This applies more to users, as they may visit a web page and allow code to run, which then rootkits their framework. Then again, this isn’t the only reason to stop running as admin while browsing the web.

e. If you can spare the energy, tracking regularly accumulating files/folders may help as well. If an attacker is gathering credentials on the server, they either need to ship them out or store-and-retrieve them. This point helps detect the “store” part.

talk about a security mistake, get booted

Threatpost has an article up about Bob Maley, the Pennsylvania CISO dismissed this past week because he discussed an incident during an RSA panel. I saw this come in through the Twitter stream and my immediate thought was how this is exactly why we have such issues sharing useful information. We make an attempt to share information, and someone boots us hard in the ass. While the booting may be justified by the state’s policy, I would question whether someone is misunderstanding the reasonable intent of that policy.

What’s funny is more people likely know about what happened in Pennsylvania due to this than if they had just slapped him on the wrist or done nothing. Well played, sirs. Maybe that in itself violates the policy…

a flaw in the daemon (the book) system

I’ve recently finished reading Freedom, the sequel to Daemon by Daniel Suarez. I made a longer post which I have yet to clean up and release, but wanted to throw out this idea. (I highly recommend the books!)

I just read a post on isc.sans.org about an SEO poisoning attack. This reminds me of all the efforts to legitimize malicious accounts, sites, and activities. For instance, want to avoid the malware-radars on Twitter? Make a ton of accounts, follow each other, and get a few dozen or hundred randomly posted tweets. You’ll blend right in!

(Tiny, TINY spoiler here for Daemon, but not for Freedom. I’m really not giving anything away that will spoil the plot.)

In Daemon/Freedom, the daemon creates this new system which is based in part on reputation. As users in the daemon’s system, you can vote up or vote down other users based on your interactions with them.

This still suffers from problems of gaming the system, just like we see malware attempting to do today. You get enough “people” going, and you can inflate your scores. Likewise, this breaks down when you don’t have people voting based only on rational reasons and instead vote on popularity or for irrational reasons. Ashton Kutcher was the first person to 1 million followers on Twitter. Would it be appropriate for him to be the most powerful user in any legitimate system that has real world ramifications? Probably not.
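To make the gaming problem concrete, here’s a toy sketch (invented names and scores, nothing from the books): a naive average-of-votes reputation score is trivially inflated by sock-puppet accounts.

```python
from statistics import mean


def reputation(votes):
    """Naive reputation: average of all votes each user has received."""
    return {user: mean(scores) for user, scores in votes.items()}


# Honest votes from real interactions.
votes = {
    "alice": [4, 5, 3],  # solid contributor
    "mallory": [1, 2],   # poor real-world interactions
}

# Mallory registers 20 sock puppets that each vote her a 5.
votes["mallory"].extend([5] * 20)

scores = reputation(votes)
# Mallory now outranks alice despite worse genuine feedback.
```

Any defense has to weight who is voting (cost per identity, vote history, web-of-trust distance), not just count the votes.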

some ranting to bring in the week’s end

Seems recently there has been a spate of incidents involving small/medium businesses where malware has opened the doors to fraudulent money withdrawals through bank web sites, or the guessing of credentials/security questions, or the tricking of customer support staff. Krebs has several articles on this topic. Rather than link around, I’m being lazy on a Friday and you’ll just have to take my word that I feel like I’m seeing these stories pop up more often this month.

We’re being taken for a ride through the same convenience that users are wanting. Convenient banking for mom at home is convenient banking for an attacker in Latvia who can get credentials. That, combined with the infancy of many of the authentication mechanisms for online banking, the infancy of security awareness by users (really, don’t do banking from the same system you view porn), and the immaturity of the banking establishment to seemingly do much about it, makes for a volatile environment.

We have a very litigious society, one that is quick to point fingers and shift blame. But we’re unfortunately all in this together. Convenience with money is not any one person’s or group’s fault. In the end, the end user needs to be more educated about computer security and not just throw their hands in the air and blame the bank when their browsing habits led to an issue.

(Then again, it’s still everyone’s fault if they were just browsing ESPN which happened to be pwned with malicious script that silently installed malware through an unpatched IE6 hole that was known about but not fixed or publicly disclosed…)

reading on microsoft subpoena compliance details

Ever wonder what Microsoft stores in their services about you, or how that might be used to aid criminal investigation? Seems an internal document has been floating around that discusses Microsoft’s Global Criminal Compliance Handbook. Some thoughts…

First, if you live in the US (or China, and others) don’t be naive and think businesses can keep what you do secret, even in the face of a subpoena or government influence. Many of these services and tools (like Skype, AIM, GMail, your cell phone provider, landline phones, ISP, etc) wouldn’t be allowed if there were not ways to intercept or request stored information from them to track down criminals. Simply because of that, you know they have to have some method of easy records requesting or eavesdropping capabilities (like the guy in that secret closet at AT&T!). Don’t get me wrong, I’m not necessarily saying this is a bad thing; I actually do favor having that capability to use for authorized purposes. It’s just really difficult to maintain that ethical level of “authorized.” Lots of people were shocked to hear that Google has a web site to request subpoena materials. I wasn’t shocked they have that capability, although I was a bit shocked that it was just a web portal that was apparently poorly protected.

Second, even if it’s not true in practice, it’s nice to read that Microsoft internally does not want to do things like record IM conversations or store your email after you’ve opted to delete it (or at least they don’t want to provide such to authorities, but I bet that lines up with what THEY want as well). Honestly, I really wouldn’t expect Google to be quite as satisfactory in this regard. It is my impression that they want to record, keep, index, and correlate as much as possible, even things you’ve marked or thought were deleted or not recorded.

Third, transparency should not be scary. Is this doc scary to read? Actually, no it is not. The only thing this leaves is whether all of this really is done in practice, but seeing the doc does nothing to challenge that, in and of itself. A doc that says all this, but in practice they do the opposite and save much of this information in personally-identifiable/correlatable form would be a bad thing. But otherwise, I think everything in this doc is actually somewhat reasonable.

Fourth, just to reiterate, I’d be shocked if Google could even begin to do this same thing.

Picked up from the infosecnews mailing list.

sans top 25 released, and thoughts on procurement contracts

I’m just perusing a DarkReading article that talks about the just-released 2010 CWE/SANS Top 25 Most Dangerous Programming Errors and something about a software procurement security contract (link from 2009, so not sure if this is what was referenced).

Without the benefit of real dialogue/discussion on what the contract is trying to do and what it really means, my kneejerk reaction echoes what Gary McGraw was quoted saying in the DarkReading article (“The liability angle is not the right idea…”). A contract is an extremely heavy-handed way to try to ensure something you can’t ensure (security). But I guess it does throw a punch to software developers where it hurts the most: money. Still, this isn’t about improving security so much as shifting monetary losses. In other words, the avoidance of those punches where it hurts the most. Should vendors/developers be responsible? Yes. But I also think natural market forces are “better” for this relationship than contract wording. You got hacked through bad software? Stop using that software. You bought bad software? Maybe your procurement *process* was hurried and flawed. Shifting costs…that’s all this really is.

It also has the dangerous possible side-effect of allowing software buyers to blame developers for everything, even improperly using software or not properly following their own best practices for network security, isolation, and so on. You mean I can blame Microsoft because my Windows XP system was connected directly to the Internet without a firewall/router?

I also would be worried that we just get more violent about disagreements on what is considered a “security issue” or a “bug.” Contracts bring about discussion on semantics and definitions…things that don’t help anyone.

going off on product reviews

Bless his heart, I’m glad Rothman is back and blogging! I really enjoy his opinions and, quite honestly, I think we line up pretty well in our feelings and editorials. It’s like having a security soulmate!

Rothman recently posted a nice opine about product reviews. Honestly, I put most of my value in products based on just 2 things: my own hands-on experiences, and the experiences of others who are hands-on and neither hand-picked by the vendor nor holding any stake whatsoever in pimping one product (vendor “partners”) or not pimping another. Basically, if I know you work as a net admin and you use product A, I’ll ask how you like it and what’s good/bad. And hopefully I get decent answers, because if I pick up that I should hate McAfee products, can I tell my boss (and his boss) that it’s because CN hates on them on Exotic Liability’s podcast? I feel like I need to have some real responses, and those often only come through hands-on time with products, either myself or others I can trust.

I would love a venue for real reviews, kinda like HardOCP is to me for computer hardware. However, Mike’s right, I’m not sure there is money in it. I mean, I’m certainly not going to pay for the review results, and I’m not sure these industries have enough players to be properly compared to computer hardware review sites or video game reviews in gaming mags. Most IT product reviews I read in mags and sites are met immediately with skepticism. Are these two in bed with each other? Is that a paid-for ad on page 76 for the same product you’re “objectively” reviewing? Do they mention anything negative at all, or criticisms, or their competition? Hell, I even dismiss articles in Insecure when the author is the CTO…

Then again, half the beauty with HardOCP runs in line with what I value in researching a product: being able to ask questions on a forum to people who have real-world experience with said products. So maybe the real problem is finding a security-specialized community-building forum for discussing products, offtopic junk, and attacks. Yeah, I like the Security Catalyst community, but I really feel like I should be wearing a tie in there and refrain from community-building offtopic posts like, Best Super Bowl commercial. Or things you can bullshit about in IM or IRC. What if Infragard had an online forum that was protected but allowed anything you wanted to talk about without being too confusing and splintered into subforums? Then again, all it takes is a copy-and-paste and “sensitive” information is leaked. Pooh.

I’m stopping before I ramble some more… I think it’s time to start idling in IRC more and participating in some nice forums…digital social networking, if you will.

virginia timebomb puts more awareness onto insiders

Krebs has a story up about malware “destroying” 800 systems for the city of Norfolk, Virginia. Reading it drives home a few points, not all of which make me happy. I will say, it sucks bad enough to have power issues that affect lots of things, but it would suck worse to have to expeditiously rebuild nearly 800 machines.

1. I’d conjecture almost every organization has a vested, financial interest in getting systems back into operation as quickly as possible. Department heads, directors, managers, and staff are all measured by that reaction. In addition, I doubt many organizations have extra staff and equipment on hand to handle an incident that affects even a fraction of their systems. This means there is often all the pressure in the organization to wipe systems and get them back up and running, slapping hands along the way of those who stored documents improperly on their local systems. And very little pressure to preserve evidence or dig deeper into defining and scoping the malware and/or intrusion. Sad, but true.

2. “Insider” gets mentioned, and honestly, probably appropriately. But that never helps with my work, mainly because I’m an insider and an admin, and locking/auditing me can only lead to inefficiencies. Yes, I’m biased. But I get the desire, from an organizational standpoint, to prevent one rogue admin from stomping on the balls of whomever. I just don’t have to entirely like it, and I prefer to say things like, “If you can’t trust your admins, you need to question your hiring practices.” Besides, solving issues surrounding godlike admins is a rather tough (read: costly) task.

3. As commenters on the article have said, it is nice to have data storage policies and even some controls in place, but if users want to save things to their systems, they’ll find ways to do it. This dives deeply into our “gambling” sort of view of risk. Everyone has some inkling that their hard disks are not magic and will fail eventually, but many people take the gamble and do nothing about it. This is one of those places where user education (okay, FUD scare tactics) helps.

4. As always with reports like this, I’m left hungry for technical details. But I’m getting used to being unsated in that regard. At least I can trust what Krebs does report, and I believe he has reported all *he’s* gotten, too. Likewise, it raises questions like: could endpoint security have detected this? Any sort of integrity auditing? And so on…at least, those are the questions I’d love to have answered if I sat in their SOC (if they have one).