and the cow jumped over the moon

We use a Cisco SSL VPN at work. One of the features we have turned on when a user connects is a keylogger scanner. It just scans and alerts, but takes no administrative action. This scan seems to be rebooting the client machines of a couple of our users, and we’re not yet sure why. While discussing this in a team meeting, my boss mentioned that when the keylogger check runs on his system, it flags two benign files as false positives. He clicks Ok and continues on. The question he raised is, “What value is this check giving us if users will just click through?”

I gave it some thought over lunch. The direct value may not be much. In fact, it may result in zero improvement for users (since they won’t know what to do with the keylogger alerts) and may not prevent any infected systems from entering our network (users can just click through). If we turn on administrative action in the VPN client, obviously legitimate users will be denied the ability to do work.

There are still a few indirect benefits to having the keylogger check on, even if it ultimately fails.

1. The scanner may log what it detects and on whom, so we have some statistics and an audit trail in case something bad happens or someone else gets in (see the sketch after this list).

2. Information is given to those few users who may investigate the issues and improve their knowledge and system health. Not alerting at all perpetuates ignorance.

3. We can potentially prevent compromised systems from entering our network or from capturing login information. And let’s face it, logging our VPN IP and login information is instant ownage. That potential may be worth it alone.
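
To illustrate the audit angle in point 1, here is a minimal, purely hypothetical sketch of an alert-only check: it flags filenames that match a tiny indicator list, records who and what and when, and takes no blocking action. The indicator names, scan roots, and log destination are all made up for illustration; the real Cisco client obviously does something far more involved.

```python
# Hypothetical alert-only keylogger check: flag files whose names match a
# small indicator list, record who/what/when for auditing, and never block.
# The indicators, scan roots, and log path are placeholders, not real signatures.
import getpass
import json
import os
import socket
import time

INDICATORS = {"klog.dll", "keyhook.sys", "capturelog.txt"}  # made-up examples
SCAN_ROOTS = [os.path.expandvars("%TEMP%"), os.path.expandvars("%APPDATA%")]
ALERT_LOG = os.path.expanduser("~/keylogger_scan.log")  # stand-in for a central collector

def scan():
    findings = []
    for root in SCAN_ROOTS:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.lower() in INDICATORS:
                    findings.append(os.path.join(dirpath, name))
    return findings

def report(findings):
    # Alert-only: append an audit record; no quarantine, no disconnect.
    record = {
        "time": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "host": socket.gethostname(),
        "user": getpass.getuser(),
        "findings": findings,
    }
    with open(ALERT_LOG, "a") as fh:
        fh.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    report(scan())
```

Even when users click Ok and move on, records like that at least give us something to correlate later.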

Of course, there are costs which might outweigh these indirect “values” that I see.

Ultimately, my boss mentioned in the meeting that it is clear that digital security is still not ready to be consumer-grade. And people certainly aren’t ready to handle it themselves, for the most part. I tend to agree with him. I prefer my controls to be as transparent to users as possible, but also as effective as possible. Unfortunately, we won’t achieve security this way, but I feel the best returns are available on the technical side rather than by relying on people.

pci, shifting blame, and perfection assumptions

I was going to shut up about Heartland, until I read Anton Chuvakin’s part III post which pointed me to a post by Verisign. After reading Verisign, read the other links Anton lists; at least one readdresses what struck me about Verisign’s post:

In our investigations of PCI related breaches, we have NEVER concluded that an affected company was compliant at the time of a breach. [emphasis theirs] PCI Assessments are point-in-time and many companies struggle with keeping it going every day.

Is there a problem with PCI? If there is one, the problem lies in the QSA community…, not the standard itself…

And Anton adds this, although I’m not sure if he’s being sarcastic or not:

Think about it! It was always either due to changes after an audit or due to an “easygrader” (or even scammer) QSA.

The above lines of thinking strike me as a dangerous place to tread. Fine, maybe we get it through enough heads that PCI is not and was never meant to be a perfect roadmap to perfect security and martinis on a tropical beach.

So we shift the “perfection” to be on the QSAs? Or maybe shift the “perfection” to be on the host company? Or shift the blame to PCI only being point-in-time (duh)? These are dangerous roads whose underlying assumption is that a perfect state of security exists at all.

QSAs can only be as good as the standards, visibility, power, talent, and cooperation of the host customer. The host customer can only be as good as their talent, corporate culture/leadership, and budget (yeah, I said it!) allow. PCI can only be as good as its authors and the customer’s and QSA’s adherence to the spirit of the rules.

To me, this isn’t an easy answer, but I’d rather not throw blame around more than necessary. I can’t blame a QSA unless they are specifically negligent, because all QSAs will make a mistake at some point, even if that mistake is because the customer didn’t give them the necessary visibility or because of some brand new technology or 0day that no one has been testing for. In that situation, no QSA will ever measure up unless they are bleeding edge and do continuous testing/auditing.

If there is any place to lay blame, it has to end up on the shoulders of the corporate entities (or any entity). They are ultimately the ones holding the keys to the most variables. Indeed, the ultimate place that needs to make the fixes and demonstrate a commitment to security is the corporate entity. Even in the absence of PCI and QSAs, they still have to buck up.

answer found on page 128

We technical geeks love solving problems, and we tend to see various things in the world as problems to be solved. We even argue amongst ourselves quite geekily about everything from tech topics to religion to wars to rhetoric. We see everything as a problem that *must* have a solution out there. We immediately view any voiced opinion as a challenge to be overcome.

We probably all did some sort of logic puzzle books or crossword puzzle books as kids. But I wonder how different our worlds might be if not every puzzle in those books had a possible solution hidden away in the back.

i don’t wanna wait in vain for your love

This article on the continuing saga of the Heartland Payment Systems data breach falls under the category of, “…no shit, you make a great and obvious point! By the way, that’s egg dripping off your face, right?”

He has called for greater information sharing to prevent cyber-criminals from using the same or similar techniques in multiple attacks.

“I believe that had we known the details about previous intrusions, we might have found and prevented the problem we learned of last week,” [CEO Robert] Carr said.

Obviously I pine about this sort of thing regularly. I think Jericho put it best on the infosecnews mailing list:

Great! I’m glad to hear Mr. Carr is all about sharing information. I take it to mean that we will get the full story about what happened at Heartland first, to show that he is serious about sharing information. After all, by his reasoning, if he shares this type of information with the world, then he may help prevent another intrusion like it.

Lastly, Mr. Carr, I can point you in the direction of any number of people who know and can share details on how to be better with security, some of whom may be technical employees in your own business. Don’t spread the blame of personal and corporate ignorance across an entire industry (even if that is true, don’t dilute the issue of Heartland in particular). At some point, someone made a mistake, made a poor risk acceptance, or decided that feigned ignorance is best (a tactic we’re taught from childhood…). I don’t mind if those above possibilities are the real reason (it happens!), but I do mind when someone tries to avoid admitting as much.

boy impersonates a cop, fools other cops

And this story of a 14-year-old boy impersonating a police officer for 5+ hours falls into the category of, “…and this is why we try to take human judgement* out of security controls.”

One source said he was told the teenager “coded a couple of assignments” — meaning he used police codes to let a dispatcher know how he and his “partner” were handling particular calls. The source said he also was told the teen was allowed to drive the squad car.

He was allowed to do this because he was familiar with the protocols (how familiar does that sound to anyone knowledgeable about social engineering?) and because controls were skipped (roll call, etc). D’oh! Maybe this was a Superbad moment?

Side note: Why don’t more people do things like this? Like so many crimes, they are not terribly hard to commit. The hardest part is crossing that very distinct moral line we have between what is right and wrong. Peer pressure influences this line, as does mental stability or digital anonymity (or distance, maybe). And once you cross that line once, crossing it again becomes easier (the downward spiral of repeat offenders). We rely heavily on this line.

* Note that we try to do this, but obviously it cannot always be done and there will always be a need for human decision-making or agility. But we try, because we know which of the two we can trust when it is created and maintained properly.

fannie mae logic bomb planted by fired employee

This Wired article on a Fannie Mae logic bomb falls into the category of, “…and this is why we stress consistency in doing the simple things in security.”

On the afternoon of Oct. 24, he was told he was being fired because of a scripting error he’d made earlier in the month, but he was allowed to work through the end of the day…

Five days later, another Unix engineer at the data center discovered the malicious code hidden inside a legitimate script that ran automatically every morning at 9:00 a.m. Had it not been found, the FBI says the code would have executed a series of other scripts designed to block the company’s monitoring system, disable access to the server on which it was running, then systematically wipe out all 4,000 Fannie Mae servers, overwriting all their data with zeroes.

How many times is a termination handled like this? Probably more regularly than I’d like to know. And how many times does it take to cause a business some serious problems? Just once.

By the way, how many reasonable people would finish out their day at work after being terminated? Sure, plenty would, but man that is a horrible decision by HR/manager.

the zombie apocalypse is coming!

Michael Gorsuch posted this article about a Texas road sign that was changed to display warnings of “ZOMBIES AHEAD!” I really can’t stop giggling about this, so I had to look up some more pics and info here and here.

I think this is hilarious! Yes, the signs are there for a reason, but if someone sees orange construction equipment and flashing signs ahead, they really should be exercising caution whether the sign is working, broken, or tampered with. If I saw this on the way to or from work, it would totally make my day.

It annoys me that people think someone “hacked” into this. Almost certainly the control box was not locked and was still using the default password. A bad move, but I’m not surprised considering the people who use these and deploy them state-wide. The last thing you want is to have your technician out on the road and unable to log into or unlock a construction sign. Fine, maybe someone did break a lock and maybe guess a password, but any non-hacker could do that. Next thing I know someone will break a window and rob a house and hackers will be blamed!

making choices in new technologies

Over the years, one lesson I am learning is how to spot which trends in IT and security are worth doing sooner rather than later (disk encryption) and which are too new, too immature, too complicated, or simply face too few threats to bother with yet (virtualization security). That certainly frees up time to focus on the important things while simply staying aware of the future ones…

simplify

Warning: This isn’t a normal geek/tech post. There must be something in the digital air that is promoting personal posts this week…

I don’t typically read more than a few skimmed words in Rothman’s first section on his regular posts (they’re always more personal), but today I read a bit more. “I can only hope at least some of us have gotten past the greed of the past 20 years. I know that’s being way too idealistic, but we can hope, no?”

Yeah, sadly, I’m not even optimistic enough to think that. 🙂 Sometimes economic woes can be caused by natural issues (both nature and just natural economic cycles) or global influences. But, in my non-expert opinion, our current climate was caused by the confluence of just two* things: greed and affluence-addiction.

Greed. There’s no real need to expound on this topic. Corporations are greedy, individuals in corporations are greedy, and individuals themselves are greedy. It just gets back to one of the implied tenets of capitalism: always increase profits. There is no plateau, no arrival at happiness or some financial equilibrium of bliss. Unless policy or corp culture/leadership provide hard stops, the risky decisions continue.

Affluence-addiction. Sure, this is my own term, made up today, but it’s what I feel drives too-high mortgages/household budgets, gas-guzzling but “impresses the coworkers” SUV tanks and V8 cars, and exponential credit debt. It’s some odd need to always have better, perfect, impressive, and increasingly costly luxuries. It’s the drive that tells someone to go wash their car every 3 days so it looks pretty (especially on Saturday, so it looks good in the Sunday church parking lot). I feel this every time I see a report on how some family is having budget issues while they send their kids to private school, drive two cars, and want their 250 channels of cable (or whatever it is people watch today, TiVo?). It’s also what drives an automotive exec to fly in a private jet to beg for money, and then, on the second try, still avoid the dirty commercial airlines by driving himself instead.

I like my affluence as much as anyone, and I have my costly hobbies and interests, but I don’t like seeing it taken to impractical excess.** I do have a little Emerson or Thoreau in me, and that’s the part writing today. There has got to be something said about the slave-driver weight of debt being inversely related to happiness…

* Yes, I’m sure there are more, especially the long-term issues like maybe a governmental administration or long-term 9/11 fallout or whathaveyou, but I consider those, ultimately, to be minor influences.

** If you want a popular movie that explores a very similar topic, watch American Beauty. And try to compare every character in the movie on a scale of superficial down towards “underlying value.” A hint: stalker boy is one extreme, real estate wife is the other.

comparing web app scanners

Anantasec has posted a review/comparison of three major web app security scanners: AppScan, WebInspect, and Acunetix. This is an excellent-looking report! Just to save time for anyone curious about the results, AppScan lagged behind the other two in detecting vulns. Acunetix certainly scores well when you get a chance to use the AcuSensor piece. I personally have only briefly used/seen WebInspect. Basically I’ve never had a budget to get real hands-on with them.

chasing the ghost that is file integrity checking

When I read these two lines from Andrew Storms over at the nCircle blog, I got a little pissed off. Then I read them again and said, “Oh, yeah!” The post subject is the Heartland Payment Systems data breach and how there is little excuse for the lack of detection:

Many well performing products are available on the market today to perform system integrity monitoring. A basic email alert to an IT systems administrator could have done much to dam the flow.

Of course, reading quickly, I missed that he is talking about a small slice of a security posture, but one that is exceedingly important when it comes to malicious software installs on servers: system integrity monitoring (aka file integrity, digital integrity, etc).

Sadly, this is a slice that I don’t think is present enough, especially in the Windows space. I believe Tripwire Linux is still free, as are possibly others, but pretty much anything for Windows beyond homegrown scripts is yet another budget cost. My last two companies have not had any digital integrity software in place beyond your normal AV/AM pieces. Of course, anything that already has an agent on the server should be putting this in as a feature, eh? Well, as long as they aren’t one of the Big Boys who get disabled or thwarted as a first step in an attack…
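
To show what I mean by homegrown scripts, here is a rough baseline-and-compare sketch: hash the files under a few watched paths, store the hashes, and on later runs report anything added, removed, or changed. The watched paths and baseline location are placeholders, and this is nowhere near a real product like Tripwire, but it is the core idea.

```python
# Homegrown file integrity sketch: build a SHA-256 baseline of watched paths,
# then re-run to report anything added, removed, or changed. Watched paths and
# the baseline file are example placeholders, not recommendations.
import hashlib
import json
import os
import sys

WATCHED = [r"C:\inetpub\wwwroot", r"C:\Windows\System32\drivers\etc"]  # examples
BASELINE = "baseline.json"

def hash_file(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot():
    state = {}
    for root in WATCHED:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    state[path] = hash_file(path)
                except OSError:
                    pass  # locked or vanished mid-scan; a real tool would log this
    return state

if __name__ == "__main__":
    current = snapshot()
    if "baseline" in sys.argv[1:] or not os.path.exists(BASELINE):
        with open(BASELINE, "w") as fh:
            json.dump(current, fh)
        print("baseline written for", len(current), "files")
    else:
        with open(BASELINE) as fh:
            old = json.load(fh)
        added = set(current) - set(old)
        removed = set(old) - set(current)
        changed = {p for p in set(old) & set(current) if old[p] != current[p]}
        for label, paths in (("ADDED", added), ("REMOVED", removed), ("CHANGED", changed)):
            for path in sorted(paths):
                print(label, path)  # in practice, email these or ship them to a collector
```

The tuning pain shows up immediately, too: Windows patches alone will light something like this up, which is exactly why the commercial tools earn their keep.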

This is yet again all part of a layered defense. Yes, people should not be doing much on servers, such as browsing anything or installing much beyond what is needed. Yes, the network should have controls to limit access, whether that be direct or pivoted (like Skoudis’ latest hacking challenge answer from McGrew). Yes, there should be network monitoring to find anomalies in egress and ingress, let alone some sort of IDS presence (come on, all that pilfered data had to either be sent out or stored in some constantly growing file!). Yes, server roles should be limited as much as possible, if only to allow regular deletion and rebuilding of nodes in a cluster when they become inconsistent or “weird,” as we call it. Blah, blah, system monitoring, blah, change management, blah, blah…
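
Even the “constantly growing file” angle is checkable with something dirt simple. Here is a sketch, with an obviously made-up watch directory and threshold: record file sizes on each run and flag anything that has ballooned since last time.

```python
# Sketch of a "constantly growing file" check: record file sizes on each run
# and flag anything that grew past a threshold since the previous run.
# The watched directory and threshold are arbitrary examples.
import json
import os

WATCH_DIR = r"C:\Windows\Temp"          # example staging spot an attacker might use
STATE_FILE = "sizes.json"
GROWTH_THRESHOLD = 50 * 1024 * 1024     # flag anything that grew by 50+ MB

def sizes(root):
    out = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                out[path] = os.path.getsize(path)
            except OSError:
                pass
    return out

if __name__ == "__main__":
    current = sizes(WATCH_DIR)
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as fh:
            previous = json.load(fh)
        for path, size in current.items():
            if size - previous.get(path, 0) >= GROWTH_THRESHOLD:
                print("rapidly growing file:", path, size, "bytes")
    with open(STATE_FILE, "w") as fh:
        json.dump(current, fh)
```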

Why is it difficult to get this integrity monitoring in place? I can only guess. Money for yet another tool? Someone to install it on all the servers and tune it to ignore all the normal things like Windows patches? A belief that ninja-like malware will just get in underneath and root down lower than these checks anyway?* Someone to watch all the alerts that come in and check them out? Maybe a lack of technical knowledge in someone who is “just watching alerts?” Or a lack of knowledge to dig far enough to explain an alert rather than write it off as yet another case of “Windows just being Windows?” Who knows, but none of these reasons would surprise me.

* Really, how often have we seen or heard of cutting edge techniques truly being used by people in the Crazy-Fu level of black hat criminal demigods? Maybe they don’t get caught, but my guess is that everything else is still so easy that there is no need to bother!

but it’s possible, right?

This is one of the fundamental differences between IT security and IT operations (or a difference between haphazard IT operations and properly managed IT operations):

web dude: “I need you to give a development service account access to a staging environment system for me to get a project done.”

sec dude: “Umm, no, you need to use a staging account in the staging environment.”

web dude: “Are you saying no because you don’t want to, or because you can’t do it?”

sec dude: “I’m saying no because that’s not how we manage and operate our environment.”

web dude: “But it’s possible, right?”

sec dude: *sigh*

It’s one of those “always painful” parts of what we do… Yes, it’s possible. It’s also possible for me to clone my HID card and leave copies scattered in the parking lot just in case someone gets stranded and needs a warm place to wait while help arrives. It’s possible for me to open up the firewall to allow everything in and out. It’s possible for me to give everyone admin rights to their machines, go home, unplug my phone, and ignore frantic calls for help when things break. Yes, it’s possible, but it’s illegal/prohibited/stupid.

Further conversation can go down paths like the difference between the right and wrong of most crime versus the right and wrong of digital practices/security; or how layered protections that go beyond the web dude’s level of knowledge in the above example will quickly quell his protests (he doesn’t know I limit accounts to certain servers); or how policy is enforced, etc.

i don’t get the big deal of cloud computing

I don’t get it, but I admit I’ve not tried all that hard.

I actually don’t get “cloud computing.” No, I know the basic principles, but I don’t get why I need it, would ever want it, or would ever care. Like “distributed computing” in an enterprise, it sounds economical in theory, but it seems otherwise impractical in the real world.

I understand that standard services can be outsourced/offloaded/clouded (depending on what era your marketing terms come from), like DNS or web acceleration or proxying. Or an Amazon storefront. Or CMS software. Or backup services from your data center. Whether I am Joe Blow or Susie Q, my needs will be pretty much the same, and both of us can be serviced easily by the provider/outsourcer/clouder/offloader.

But I feel this only works when what you need is predictable by the vendor providing it, i.e. the more customized your needs are, the less you will ever be happy with what someone else builds. I see this quarterly in the pain levels of implementing third-party software and applications versus having in-house developers roll their own.

Fine, high-end number crunching may work, but I think those organizations with that need already invest a lot in the people designing such number crunching, and can probably fit into clouds better just by sheer numbers and mass. The people who still use mainframes, I guess. Maybe that’s the problem, maybe I’m just not in the mainframe space…

Update: I use my ISP’s DNS services. Is that cloud computing? I also use GoDaddy as my registrar, and I may someday move to shared hosting. Is that also cloud computing? See, I don’t get it. 🙂

So it gets back to, why should I ever care about the cloud? I feel it sounds nice on paper, and for the few people who jump in with proper expectations it will be “just fine,” but for everyone else I think it will be more difficult to wrap heads around than keeping the computing in-house.

cutting corners with security*

There’s a comment over on Mogull’s blog post about the Heartland Payment Systems incident that was announced the other day. I wanted to link to it quickly and highlight it. I won’t post the name or even copy the comment itself, but rather paraphrase (I’m just avoiding searches, especially if the comment gets removed later):

I have worked for the company for many years. They cut corners. They have big problems internally.

For the moment, let’s assume this comment is truthful and legit. A couple points I will use this for:

1. You get the real story on security the farther down into the trenches you go. Yes, you get far less actual risk management and ability to accept risk, but you get the real deal from the techs who have their fingers on the pulse of the network, systems, and processes. Any respectable security posture should include information-gathering from them.

2. Look behind the curtains of any company, and I would estimate that 99% cut corners, even up to making very huge mistakes or oversights. This is why pen-testing is not going away or beginning to die. This is economics, really, and part of the superficial facade that a business can throw up to anyone looking too closely. A role-play exercise for a security posture should be to pretend your systems and processes are suddenly transparent. What would the experts point out? What would Mike Rothman do? (Along the lines of “What would Brian Boitano do?”) This might throw eggs at whatever “security through obscurity” you rely on, but assume that obscurity still gives some value and weigh it only lightly. Really, the role-play should expose the real problems.

3. Is it possible for PCI to improve a poor security posture that has been an active choice for that entity? If a company is cutting corners, choosing to accept risk poorly, or simply incompetent, I would bet they will actively make sure PCI doesn’t catch it, or outright lie, fudge, or (hah) cut corners with the Assessor.

*”Cutting Corners With Security” reminds me too much of the book series that might read, “How to Cheat at Securing Your Shit.”

ev ssl fail or how to rebrand ssl and charge a premium price

The site SSLFail has rekindled my disdain for the “Extended Validation SSL” farce. It sounds lofty to have a CA validate that you are who you say you are, but all they really do is make sure you are a corporation or entity of some sort. After that (at least for the CA I use, which is one of the major three), I can order as many EV SSL certs as I want and apply them to any domain I can register. That includes domains that look like they might belong to someone else, i.e. their brand. I do this on a weekly basis for our clients. I’m not affiliated with company XYZ, but I sure can register a domain and purchase an EV SSL cert for it!

The first time my company acquired an EV SSL cert, it required jumping through some extra, vague hoops. All I know is that it required a call to our main phone line (answered by someone who claimed to be a receptionist), which was then handed off to one of the people named on our company charter (?) (someone who claimed to be the CFO). In our case, of course, these people were legit, but phone verification is ridiculous. I’m sure the CA looked up other things, but really the only information given was our incorporation date and entity type (corporation).
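
If you are curious what all that validation actually ends up asserting, you can pull a site’s cert subject yourself. Here is a small sketch using Python’s standard ssl module; the hostname is just an example, and depending on the OpenSSL build some EV-specific fields (jurisdiction, businessCategory) may show up as raw OID strings.

```python
# Sketch: connect to a site and print its certificate subject to see what an
# EV cert actually asserts -- the legal entity's name, type, and jurisdiction,
# not anything about whose brand the domain resembles. Hostname is an example.
import socket
import ssl

HOST = "www.example.com"  # substitute any EV-protected site

def subject_fields(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # getpeercert() returns a parsed dict once the chain validates
            cert = tls.getpeercert()
    fields = {}
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            fields[key] = value
    return fields

if __name__ == "__main__":
    for key, value in sorted(subject_fields(HOST).items()):
        # EV certs typically carry organizationName, businessCategory,
        # jurisdiction fields, and a serialNumber (the incorporation number).
        print(key + ": " + str(value))
```

Nothing in that subject stops me from putting an equally green-bar cert on a lookalike domain I registered five minutes ago.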

I imagine if I were a sole proprietor or LLC I’d still get approved, or at least an agent of mine would get it approved if they ran my web presence and I wanted EV SSL. Besides, just as Blizzard has no real incentive to blacklist accounts or credit cards used to purchase exploitative accounts (read this book), what incentive is there for a CA to turn away my desire to purchase an EV SSL cert? Hah. Integrity and trust? Only if the process were totally transparent!

The point is, I’m less than impressed by the money-making scheme that EV SSLs are. And even less impressed by browsers forcing this adoption. It really is maybe the first time I think Firefox has failed me.