jay beale releases the middler

Thank you to Tyler (SSLFail.com) for posting that Jay Beale has (finally!) released The Middler (sorry, no front page discussing it, just a direct link). Released, but upon a very quick glance it looks like it may not be quite finished yet. The Middler was discussed at Defcon 16. It is a tool that can inject into HTTP traffic between client and server, intercept and reuse session credentials, and more. In short, this is a tool that automates what many of us have known can happen when you’re on a non-trusted LAN. Only scarier. And more accessible.
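To make that concrete, here’s a minimal sketch (my own toy illustration, not The Middler’s actual code) of how much an in-path box can see on plaintext HTTP: a logging proxy that prints any Cookie header passing through it while relaying the request to the real server. The listen address, port, and behavior are all assumptions for the demo.

```python
# A toy logging proxy (hypothetical, for illustration only): relays plain
# HTTP and prints any Cookie header it sees along the way. Plain HTTP only;
# HTTPS (CONNECT) requests will simply fail, which is rather the point.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)   # assumption: where the in-path box listens


def handle(client):
    try:
        request = client.recv(65535)  # single recv: fine for a sketch, not huge posts
        if not request:
            return
        lines = request.decode("latin-1", errors="replace").split("\r\n")
        host = next((l.split(":", 1)[1].strip() for l in lines
                     if l.lower().startswith("host:")), "")
        for line in lines:
            if line.lower().startswith("cookie:"):
                # The whole point: session credentials travel in the clear.
                print(f"[+] {host or 'unknown host'} -> {line}")
        if not host:
            return
        name, _, port = host.partition(":")
        with socket.create_connection((name, int(port) if port else 80),
                                      timeout=10) as upstream:
            upstream.sendall(request)         # pass the request along untouched
            while True:                       # relay the response back
                chunk = upstream.recv(65535)
                if not chunk:
                    break
                client.sendall(chunk)
    except OSError:
        pass  # timeouts (keep-alive) and network errors just end the relay
    finally:
        client.close()


def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen(5)
    print(f"Logging proxy listening on {LISTEN_ADDR[0]}:{LISTEN_ADDR[1]}")
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()


if __name__ == "__main__":
    main()
```

Point a browser’s HTTP proxy setting at it and browse a non-SSL site; every session cookie shows up in the console. Anything sitting between you and the server on an untrusted LAN gets the same view, no “hacking” required.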

By the way, props to Jay for apparently skipping ahead to the demos. There is a ton of information in his presentation, and all of it relevant, but I was a bit disappointed in not seeing many demos at the Defcon talk. Despite that, his was one of the best talks I saw there!

oscp carries some digital street cred

Grats to Mubix on his OSCP! In his post he talks about how the OSCP won’t get anyone a job, and I think he’s 99% correct. However, the caveat is that to anyone who knows what the OSCP is, it does have meaning. So the other 1% might be a manager who knows the OSCP and knows that anyone who has it probably has a certain level of geekery and interest in security beyond what even the CISSP will demonstrate (e.g. those sales people who are required to get the CISSP and finally do so on their 6th try…). This is part of the reason I want to get back to the OSCP after my ill-fated attempt last year (right when I got slammed with a coworker quitting). The other part being that it actually is freakin hands-on!

core releases vnc client security advisory

If you use a VNC product, more specifically UltraVNC or TightVNC (or others), you probably want to keep your eyes open for an upcoming new version of the client. Core released a VNC security advisory, and from the sound of it, a workable exploit is likely (hi Metasploit!).

Offsetting that risk, the exploit is on the client and not the server. This means an attacker has to not only get a workable exploit, but also get a VNC user to connect to an untrusted or subverted VNC server. If you have .vnc files automatically mapped to the VNC client, this is where it might be useful for Metasploit to have a fake VNC server module to trick admins into connecting back to an attacker.

Now, I often get back to ideas on making a network more hostile to attackers, and this can be another opportunity, especially if a workable exploit is developed or released. Get your hands on a subverted VNC server, set it up in some dark space or honeypot area of your network, and wait for someone to attempt to connect.
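Here is a minimal sketch of that idea (my own toy code, nothing to do with Core’s advisory or any real exploit): a fake listener on the standard VNC port that speaks just enough RFB to log who came knocking. On a subnet where nothing should ever speak VNC, any connection at all is worth a look.

```python
# Minimal sketch of a do-nothing "VNC server" sitting in a dark corner of
# the network that only logs who tries to connect. Port 5900 and the RFB
# banner are standard; everything else here is assumption/placeholder.
import socket
from datetime import datetime

LISTEN = ("0.0.0.0", 5900)           # default VNC display :0
BANNER = b"RFB 003.008\n"            # standard RFB protocol version greeting


def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN)
    srv.listen(5)
    print(f"Fake VNC listener on {LISTEN[0]}:{LISTEN[1]}")
    while True:
        conn, peer = srv.accept()
        stamp = datetime.now().isoformat(timespec="seconds")
        try:
            conn.settimeout(5)
            conn.sendall(BANNER)                 # look just real enough
            client_version = conn.recv(12)       # client replies with its version
            print(f"[{stamp}] connection from {peer[0]}:{peer[1]}, "
                  f"client said: {client_version!r}")
        except OSError:
            print(f"[{stamp}] connection from {peer[0]}:{peer[1]} (no handshake)")
        finally:
            conn.close()


if __name__ == "__main__":
    main()
```

Feed the output into whatever alerting you already have; in that dark corner, a single log line is the alarm.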

and the cow jumped over the moon

We use a Cisco SSL VPN at work. One of the features we have turned on when a user connects is a keylogger scanner. It just scans and alerts, but takes no administrative action. This scan seems to be rebooting the client machines of a couple of our users, and we’re not yet sure why. While discussing this in a team meeting, my boss mentioned that when the keylogger check runs on his system, it flags two benign files as false positives. He clicks OK and continues on. The question he raised is, “What value is this check giving us if users will just click through?”

I gave it some thought over lunch. The direct value may not be much. In fact, it may result in zero improvement for users (since they won’t know what to do with the keylogger alerts) and may not prevent any infected systems from entering our network (users can just click through). If we turn on administrative action by the VPN client, obviously legitimate users will be denied the ability to do work.

There are a few indirect values to still having the keylogger check on, even if it ultimately fails.

1. The keylogger check may log what it detects and on whom, so we have some statistics and an audit trail in case something bad happens, or someone else gets in.

2. Information is given to those few users who may investigate the issues and improve their knowledge and system health. Not alerting at all perpetuates ignorance.

3. We can potentially prevent bad systems from entering our network or from capturing login information. And let’s face it, logging our VPN IP and login information is instant ownage. That possibility alone may be worth it.

Of course, there are costs which might outweigh these indirect “values” that I see.

Ultimately, my boss mentioned in the meeting that it is clear that digital security is still not ready to be consumer-grade. And people certainly aren’t ready to handle it themselves, for the most part. I tend to agree with him. I prefer my controls to be transparent to users as much as possible, but as good as possible as well. Unfortunately, we won’t achieve security this way, but I feel the best returns are available on the technical side rather than relying on people.

pci, shifting blame, and perfection assumptions

I was going to shut up about Heartland, until I read Anton Chuvakin’s part III post, which pointed me to a post by Verisign. After reading Verisign, read the other links Anton lists; at least one addresses the same thing that struck me about Verisign’s post:

In our investigations of PCI related breaches, we have NEVER concluded that an affected company was compliant at the time of a breach. [emphasis theirs] PCI Assessments are point-in-time and many companies struggle with keeping it going every day.

Is there a problem with PCI? If there is one, the problem lies in the QSA community…, not the standard itself…

And Anton adds this, although I’m not sure if he’s being sarcastic or not:

Think about it! It was always either due to changes after an audit or due to an “easygrader” (or even scammer) QSA.

The above lines of thinking strike me as a dangerous place to tread. Fine, maybe we get it through enough heads that PCI is not and was never meant to be a perfect roadmap to perfect security and martinis on a tropical beach.

So we shift the “perfection” to be on the QSAs? Or maybe shift the “perfection” to be on the host company? Or shift the blame to PCI only being point-in-time (duh)? These are dangerous roads whose underlying assumption is that there is a state of security.

QSAs can only be as good as the standards, visibility, power, talent, and cooperation of the host customer. The host customer can only be as good as their talent, corporate culture/leadership, and budget (yeah, I said it!) allow them to be. PCI can only be as good as its authors and the adherence to the spirit of the rules by the customer and QSA.

To me, this isn’t an easy answer, but I’d rather not throw blame around more than necessary. I can’t blame a QSA unless they are specifically negligent, because all QSAs will make a mistake at some point, even if that mistake is because the customer didn’t give them the necessary visibility or because of some brand new technology or 0day that no one has been testing for. In that situation, no QSA will ever measure up unless they are bleeding edge and do continuous testing/auditing.

If there is any place to lay blame, it has to end up on the shoulders of the corporate entities (or any entity). They ultimately hold the keys to the most variables. Indeed, the corporate entity is the one that needs to make the fixes and demonstrate its commitment to security. Even in the absence of PCI and QSAs, they still have to buck up.

answer found on page 128

We technical geeks love solving problems, and we tend to see various things in the world as problems to be solved. We even argue amongst ourselves quite geekily about everything from tech topics to religion to wars to rhetoric. We see everything as a problem that *must* have a solution out there. We immediately view any voiced opinion as a challenge to be overcome.

We probably all did some sort of logic puzzle books or crossword puzzle books as kids. But I wonder how different our worlds might be if not every puzzle in those books had a possible solution hidden away in the back.

i don’t wanna wait in vain for your love

This article on the continuing saga of the Heartland Payment Systems data breach falls under the category of, “…no shit, you make a great and obvious point! By the way, that’s egg dripping off your face, right?”

He has called for greater information sharing to prevent cyber-criminals from using the same or similar techniques in multiple attacks.

“I believe that had we known the details about previous intrusions, we might have found and prevented the problem we learned of last week,” [CEO Robert] Carr said.

Obviously I opine about this sort of thing regularly. I think Jericho put it best on the infosecnews mailing list:

Great! I’m glad to hear Mr. Carr is all about sharing information. I take it to mean that we will get the full story about what happened at Heartland first, to show that he is serious about sharing information. Afterall, by his reasoning, if he shares this type of information with the world, then he may help prevent another intrusion like it.

Lastly, Mr. Carr, I can point you in the direction of any number of people who know and can share details on how to be better with security, some of whom may be technical employees in your own business. Don’t spread the blame of personal and corporate ignorance across an entire industry (even if that is true, don’t dilute the issue of Heartland in particular). At some point, someone made a mistake, made a poor risk acceptance, or decided that feigned ignorance is best (a tactic we’re taught from childhood…). I don’t mind if those above possibilities are the real reason (it happens!), but I do mind when someone tries to avoid admitting as much.

boy impersonates a cop, fools other cops

And this story of a 14-year-old boy impersonating a police officer for 5+ hours falls into the category of, “…and this is why we try to take human judgement* out of security controls.”

One source said he was told the teenager “coded a couple of assignments” — meaning he used police codes to let a dispatcher know how he and his “partner” were handling particular calls. The source said he also was told the teen was allowed to drive the squad car.

He was allowed to do this because he was familiar with the protocols (how familiar does that sound to anyone knowledgeable about social engineering?) and because controls were skipped (roll call, etc). D’oh! Maybe this was a Superbad moment?

Side note: Why don’t more people do things like this? Like so many crimes, they are not terribly hard to commit. The hardest part is crossing that very distinct moral line we have between what is right and wrong. Peer pressure influences this line, as does mental stability or digital anonymity (or distance, maybe). And once you cross that line once, crossing it again becomes easier (the downward spiral of repeat offenders). We rely heavily on this line.

* Note that we try to do this, but obviously this cannot always be done and there will always be a need for human decision-making or agility. But we try to, because we know which one we can trust, when created and maintained properly.

fannie mae logic bomb planted by fired employee

This Wired article on a Fannie Mae logic bomb falls into the category of, “…and this is why we stress consistency in doing the simple things in security.”

On the afternoon of Oct. 24, he was told he was being fired because of a scripting error he’d made earlier in the month, but he was allowed to work through the end of the day…

Five days later, another Unix engineer at the data center discovered the malicious code hidden inside a legitimate script that ran automatically every morning at 9:00 a.m. Had it not been found, the FBI says the code would have executed a series of other scripts designed to block the company’s monitoring system, disable access to the server on which it was running, then systematically wipe out all 4,000 Fannie Mae servers, overwriting all their data with zeroes.

How many times is a termination handled like this? Probably more regularly than I’d like to know. And how many times does it take to cause a business some serious problems? Just once.

By the way, how many reasonable people would finish out their day at work after being terminated? Sure, plenty would, but man that is a horrible decision by HR/manager.

the zombie apocalypse is coming!

Michael Gorsuch posted this article of a Texas road sign that was changed to display warnings of “ZOMBIES AHEAD!” I really can’t stop giggling about this, so I had to look up some more pics and info here and here.

I think this is hilarious! Yes, the signs are there for a reason, but if someone sees orange construction equipment and flashing signs ahead, they really should be exercising caution whether the sign is working, broken, or tampered with. If I saw this on the way to or from work, it would totally make my day.

It annoys me that people think someone “hacked” into this. Almost certainly the control box was not locked and was still using the default password. A bad move, but I’m not surprised considering the people who use these and deploy them state-wide. The last thing you want is to have your technician out on the road and unable to log into or unlock a construction sign. Fine, maybe someone did break a lock and maybe guess a password, but any non-hacker could do that. Next thing I know someone will break a window and rob a house and hackers will be blamed!

making choices in new technologies

Over the years, one lesson I am learning is how to spot which trends in IT and security are things to do sooner rather than later (disk encryption) and which are too new, too immature, too complicated, or simply face too few threats to bother with now (virtualization security). That certainly allows time to focus on the important things and simply stay aware of the future things…

simplify

Warning: This isn’t a normal geek/tech post. There must be something in the digital air that is promoting personal posts this week…

I don’t typically read more than a few skimmed words in Rothman’s first section on his regular posts (they’re always more personal), but today I read a bit more. “I can only hope at least some of us have gotten past the greed of the past 20 years. I know that’s being way too idealistic, but we can hope, no?”

Yeah, sadly, I’m not even optimistic enough to think that. 🙂 Sometimes economic woes can be caused by natural issues (both nature itself and normal economic cycles) or global influences. But, in my non-expert opinion, our current climate was caused by the confluence of just two* things: greed and affluence-addiction.

Greed. There’s no real need to expound on this topic. Corporations are greedy, individuals in corporations are greedy, and individuals themselves are greedy. It just gets back to one of the insinuated tenets of capitalism: always increase profits. There is no plateau, no arrival at happiness or some financial equilibrium of bliss. Unless policy or corp culture/leadership provide hard stops, the risky decisions continue.

Affluence-addiction. Sure, this is my own term I made up today, but it’s what I feel drives too-high mortgages/household budgets, gas-guzzling but “impresses the coworkers” SUV tanks and V8 cars, and exponential credit debt. Some odd need to always have better, more perfect, more impressive, and increasingly costly luxuries. The drive that tells someone to go wash their car every 3 days so it looks pretty (especially on Saturday so it looks good in the Sunday church parking lot). I feel this every time I see a report on how some family is having budget issues as they send their kids to private school, drive two cars, and want their 250 channels of cable (or whatever it is people watch today, TiVo?). Or the drive behind an automotive exec’s decision to fly in a private jet to beg for money, but then continue to avoid the dirty commercial airlines by driving himself on the second try.

I like my affluence as much as anyone, and I have my costly hobbies and interests, but I don’t like it being taken to non-practical excess.** I do have a little Emerson or Thoreau in me, and that’s the part writing today. There has got to be something said about the slave-driver weight of debt being inversely related to happiness…

* Yes, I’m sure there are more, especially the long-term issues like maybe a governmental administration or long-term 9/11 fallout or whathaveyou, but I consider those, ultimately, to be minor influences.

** If you want a popular movie that explores a very similar topic, watch American Beauty. And try to compare every character in the movie on a scale of superficial down towards “underlying value.” A hint: stalker boy is one extreme, real estate wife is the other.

comparing web app scanners

Anantasec has posted a review/comparison of three major web app security scanners: AppScan, WebInspect, and Acunetix. This is an excellent-looking report! Just to save time for anyone curious about the results, AppScan lagged behind the other two in detecting vulns. Acunetix certainly scores well when you get a chance to use the AcuSensor piece. I personally have only briefly used/seen WebInspect. Basically I’ve never had a budget to get real hands-on with them.

chasing the ghost that is file integrity checking

When I read these two lines from Andrew Storms over at the nCircle blog, I got a little pissed off. Then I read them again and said, “Oh, yeah!” The post subject is the Heartland Payment Systems data breach and how there is little excuse for the lack of detection:

Many well performing products are available on the market today to perform system integrity monitoring. A basic email alert to an IT systems administrator could have done much to dam the flow.

Of course, reading quickly, I missed that he is talking about a small slice of a security posture, but one that is exceedingly important when it comes to malicious software installs on servers: system integrity monitoring (aka file integrity, digital integrity, etc.).

Sadly, this is a slice that I don’t think is present enough, especially in the Windows space. I believe Tripwire Linux is still free, as are possibly others, but pretty much anything for Windows beyond homegrown scripts is yet another budget cost. My last two companies have not had any digital integrity software in place beyond your normal AV/AM pieces. Of course, anything that already has an agent on the server should be putting this in as a feature, eh? Well, as long as they aren’t one of the Big Boys who get disabled or thwarted as a first step in an attack…
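For what it’s worth, the “homegrown scripts” end of that spectrum doesn’t have to be elaborate. Here’s a bare-bones sketch (a hypothetical fim.py of my own, not any product’s code) that records SHA-256 hashes of a directory tree and later diffs the tree against that baseline; the things the commercial tools actually earn their money on are protecting the baseline, scaling out, and tuning out normal patch noise.

```python
# Bare-bones file integrity monitoring sketch: hash a directory tree and
# diff it against a saved baseline. Paths and the baseline filename are
# placeholders; real tools add signing, tamper resistance, and sane alerting.
import hashlib
import json
import os
import sys


def hash_tree(root):
    """Return {relative_path: sha256 hex digest} for every file under root."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            try:
                with open(path, "rb") as fh:
                    for chunk in iter(lambda: fh.read(65536), b""):
                        digest.update(chunk)
            except OSError:
                continue  # unreadable file; a real tool would flag this too
            hashes[os.path.relpath(path, root)] = digest.hexdigest()
    return hashes


def main():
    if len(sys.argv) != 4 or sys.argv[1] not in ("baseline", "check"):
        sys.exit("usage: fim.py baseline|check <directory> <baseline.json>")
    mode, root, baseline_file = sys.argv[1:]
    current = hash_tree(root)
    if mode == "baseline":
        with open(baseline_file, "w") as fh:
            json.dump(current, fh, indent=2)
        print(f"Recorded {len(current)} files.")
        return
    with open(baseline_file) as fh:
        baseline = json.load(fh)
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in current.keys() & baseline.keys()
                     if current[p] != baseline[p])
    for label, items in (("ADDED", added), ("REMOVED", removed), ("CHANGED", changed)):
        for path in items:
            print(f"{label}: {path}")  # in real life, email/syslog this alert
    if not (added or removed or changed):
        print("No changes detected.")


if __name__ == "__main__":
    main()
```

Run it once with baseline after a known-good build, then schedule check and mail yourself the output; that’s roughly the “basic email alert” level of detection being talked about above.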

This is yet again all part of a layered defense. Yes, people should not be doing much on servers, such as browsing anything or installing much beyond what is needed. Yes, the network should have controls to limit access, whether that be direct or pivoted (like Skoudis’ latest hacking challenge answer from McGrew). Yes, there should be network monitoring to find anomalies in egress and ingress, let alone some sort of IDS presence (come on, all that pilfered data had to either be sent out or stored in some constantly growing file!). Yes, server roles should be limited as much as possible, if only to allow regular deletion and rebuilding of nodes in a cluster when they become inconsistent or “weird,” as we call it. Blah, blah, system monitoring, blah, change management, blah, blah…

Why is it difficult to get this integrity monitoring? I can only guess. Money for yet another tool? Someone to install it on all the servers and tune it to ignore all the normal things like Windows patches? Lack of trust, since ninja-like malware might get in underneath and root down lower than these checks?* Someone to watch all the alerts that come in and check them out? Maybe a lack of technical knowledge in someone who is “just watching alerts”? Or a lack of knowledge to dig far enough to explain an alert rather than write it off as yet another case of “Windows just being Windows”? Who knows, but none of these reasons would surprise me.

* Really, how often have we seen or heard of cutting edge techniques truly being used by people in the Crazy-Fu level of black hat criminal demigods? Maybe they don’t get caught, but my guess is that everything else is still so easy that there is no need to bother!

but it’s possible, right?

This is one of the fundamental differences between IT security and IT operations (or a difference between haphazard IT operations and properly managed IT operations):

web dude: “I need you to give a development service account access to a staging environment system for me to get a project done.”

sec dude: “Umm, no, you need to use a staging account in the staging environment.”

web dude: “Are you saying no because you don’t want to, or because you can’t do it?”

sec dude: “I’m saying no because that’s not how we manage and operate our environment.”

web dude: “But it’s possible, right?”

sec dude: *sigh*

It’s one of those “always painful” parts of what we do… Yes, it’s possible. It’s also possible for me to clone my HID card and leave copies scattered in the parking lot just in case someone gets stranded and needs a warm place to wait while help arrives. It’s possible for me to open up the firewall to allow everything in and out. It’s possible for me to give everyone admin rights to their machines, go home, unplug my phone, and ignore frantic calls for help when things break. Yes, it’s possible, but it’s illegal/prohibited/stupid.

Further conversation can go down topics like the difference between the right and wrong of most crime versus the right and wrong of digital practices/security; or how layered protections that go beyond the web dude’s level of knowledge in the above example will succinctly quell his protests (he doesn’t know I limit accounts to certain servers); or how policy is enforced, etc.