web services for port probing outbound and inbound

Don’t have things set up at home and need to probe an open port from inside a network? Try out portquiz.net, which listens on all TCP ports.

Need something to probe an external port (maybe because you can’t hairpin to the external interface on your firewall)? Try out www.t1shopper.com/tools/port-scan/.

I have no affiliation with these, nor do I attest to their legitimacy. Just tools available out on the web. I use these to test out logs/firewalls.
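For what it’s worth, you don’t even need a scanner on the inside host; a few lines of Python against portquiz.net will do the outbound check. A minimal sketch (the ports in the loop are arbitrary examples, not a recommended list):

```python
import socket

def probe_outbound(port, host="portquiz.net", timeout=5):
    """Attempt an outbound TCP connection; portquiz.net listens on (nearly) all TCP ports."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in (80, 443, 8080, 3389):
        status = "open outbound" if probe_outbound(port) else "blocked/filtered"
        print(f"tcp/{port}: {status}")
```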

neiman marcus breach info from businessweek

Just like with Target, we’re hopefully going to hear a lot more about the Neiman Marcus breach, such as Sophos’ Naked Security reporting on a businessweek article: Neiman Marcus Hackers Set Off 60,000 Alerts While Bagging Credit Card Data. Quotes below will be from the businessweek article.

…a spokeswoman for Neiman Marcus, says the hackers were sophisticated…

Has there ever been a newsworthy breach that was *not* described by the victim as “sophisticated?” Please, stop. Even if they were, please stop with the implied excuse that they were sophisticated and thus oh so hard to prevent so please sympathize with us. /fairmaidenindistressvoice

According to the report, Neiman Marcus was in compliance with standards meant to protect transaction data when the attack occurred.

Pray tell what data security requirements these were: internal? industry? PCI? And I require an explanation of why the requirements were met and yet an attack succeeded, not only in penetrating a network, but remaining in a network, planting code repeatedly on trusted devices, and exfiltrating card data. I’m not saying that security needs to be perfect or that requirements need to result in perfect security. But this is a gray line that needs to be spelled out. Otherwise I can make a shitty security policy, get hacked, and say the exact same line like it matters. “We were compliant with standards at the time the attack occurred.” Something clearly broke down or was missed. I need to learn from that.

The company’s centralized security system, which logged activity on its network, flagged the anomalous behavior of a malicious software program—although it didn’t recognize the code itself as malicious, or expunge it, according to the report. The system’s ability to automatically block the suspicious activity it flagged was turned off because it would have hampered maintenance, such as patching security holes, the investigators noted.

This is always a security bugaboo just waiting to bite someone; and it *will* *always* bite someone. Either you turn this on and get things stopped that should be stopped…and almost certainly hamper maintenance or legitimate business and incur the wrath of business managers…or you let it run much looser and not get in the way of business and hope your eyeballs catch the bad things. This is always a tough proposition for anyone except the largest of companies. I do actually sympathize on this, while at the same time wishing they had done it correctly (which itself is a moving target).

“These 60,000 entries, which occurred over a three-and-a-half month period, would have been on average around 1 percent or less of the daily entries on these endpoint protection logs, which have tens of thousands of entries every day,”

If there is an elephant in the room where we’re talking about digital security, then there’s a room outside the one we all look at, and inside *that* room is a larger elephant. And that elephant is alert tuning and watching. No product turns on and is correct out of the box. This means every organization has a different posture on those tools that throw alarms. This means every organization’s alarm posture is dependent on their security staff. In addition, it is dependent on the security staff to sift through whatever alarms there are plus, when they can, sift through the false alarms just to make sure nothing weird is going on. All of this is hard, freakin’ work; time-consuming work; and is never seen as a value-add to anyone except organizations where security is core to their business. And if you think Neiman Marcus has it bad, visit any SMB in the country.
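To put rough numbers on that quote: 60,000 entries over three and a half months is only a few hundred a day, and against a log that genuinely has tens of thousands of entries daily, that really does land around the 1 percent mark. A quick back-of-the-envelope sketch (the daily log volume below is my assumption, not a figure from the report):

```python
# Back-of-the-envelope math on the numbers quoted above.
alerts_total = 60_000          # malware-related entries cited in the report
period_days = 3.5 * 30         # roughly three and a half months
daily_log_entries = 60_000     # assumed stand-in for "tens of thousands of entries every day"

alerts_per_day = alerts_total / period_days
fraction = alerts_per_day / daily_log_entries

print(f"~{alerts_per_day:.0f} malicious entries/day, "
      f"or about {fraction:.2%} of the daily log volume")
```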

Should someone have noticed a nightly deletion of code off trusted devices? Maybe. I would kinda like to think so, but the realist in me is shaking his head in a not-positive fashion.

Sticking to the elephant in the room that contains the room; there is yet another one outside of even *that* room. And that room has a nastier elephant in it. This elephant does just one thing, he recites this litany: “If you staff a security team and they silently stop everything, the company will see them as unnecessary and cut back.” Often, a business only “sees” IT when issues happen. If everything is smooth, then clearly their job is easy and they can absorb cutbacks. So you kinda want to be good, but not so good that everyone wonders if you’re even doing anything. “I’m blocking attacks every day!” “Yeah, but *are* you really?” You gotta prove that to non-technical stakeholders.

“In an ideal world, your card-data network should be completely segmented from the general-purpose network,” said Robert Sadowski, director of technology solutions at RSA Security, a division of EMC (EMC). “Unfortunately, an ideal world is often different than reality.”

It’s like we’re on a safari, since that’s another elephant in the corner! It’s very easy to talk about segmentation and separation. It’s easy to pad diagrams and plans and even sneak in talk about VLANs and traditional broadcast separation. But pull up those covers, and you’ll see a long gray snout and sad black eyes looking up at you. True separation is difficult. It means a separate core, separate switches, separate virtualization hosts if you’re a virtual shop, separate Internet links if you have many remote locations, or at least heavy separation with access control devices (ACL or firewalls, pretty much) in place between the two. When you get strict about it, that shit gets expensive to a business very quickly.
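To make that last option concrete, “heavy separation with access control devices” ultimately boils down to a default-deny rule base between the zones, where anything not explicitly listed gets dropped. A toy sketch of the idea (the subnets, port, and allowed flow are entirely made up for illustration, not anyone’s actual design):

```python
from ipaddress import ip_address, ip_network

# Hypothetical zones -- illustrative only.
CARD_DATA_NET = ip_network("10.50.0.0/16")   # assumed card-data segment
GENERAL_NET   = ip_network("10.10.0.0/16")   # assumed general-purpose segment

# Default-deny: only explicitly listed (src_net, dst_net, dst_port) tuples pass.
ALLOWED = [
    (GENERAL_NET, CARD_DATA_NET, 443),       # e.g. one management console over TLS
]

def permitted(src, dst, dst_port):
    src, dst = ip_address(src), ip_address(dst)
    return any(src in s and dst in d and dst_port == p for s, d, p in ALLOWED)

print(permitted("10.10.4.20", "10.50.1.9", 443))   # True  -- explicitly allowed
print(permitted("10.10.4.20", "10.50.1.9", 3389))  # False -- everything else is dropped
```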

Neiman Marcus was first notified of a potential problem on Dec. 17 by TSYS (TSS), a company that processes credit-card payments, according to the report. TSYS linked fraudulent card usage back to what’s called “a common point of purchase”—in this case, Neiman Marcus stores.

I always mention this as a way to say, “So, how was the breach noticed?” Kudos to the processors and banks and such for having fraud departments that investigate things like this. And those people who trawl carder sites for new caches of numbers and who try to identify where they come from and alert proper authorities. Clearly corporations are going to continue to need and rely upon this backwards alerting. “Oh crap, I’m glad someone was watching that pawn shop. I had no idea my house was broken into until someone said they saw my television in the pawn shop window.” (Ok, that’s not totally fair, since data isn’t removed, rather copied…)
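The “common point of purchase” analysis itself is conceptually simple: take the cards showing fraud, look at where each was legitimately used, and find the merchant they all share. A toy sketch with made-up data (real fraud teams obviously work from far messier transaction feeds than this):

```python
from collections import Counter

# Toy transaction history: compromised card -> merchants where it was used.
fraud_card_history = {
    "card_a": {"Gas-N-Go", "Neiman Marcus #12", "CoffeeHut"},
    "card_b": {"Neiman Marcus #47", "BookBarn"},
    "card_c": {"Neiman Marcus #12", "Gas-N-Go"},
}

# Count how many compromised cards touched each merchant (collapse store numbers).
counts = Counter()
for merchants in fraud_card_history.values():
    for m in merchants:
        counts[m.split(" #")[0]] += 1

# The merchant shared by the most fraudulent cards is the likely common point of purchase.
print(counts.most_common(1))   # [('Neiman Marcus', 3)] -- every compromised card touched it
```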

The Target hackers used a protocol known as FTP, for file transfer protocol, to extract the card data, Raff said. The Neiman Marcus hackers used custom hacking software and sent the data out through a virtual private network, or VPN, Raff said, based on facts from the report.

No. At this point I’m spent. I don’t even want to go into how a VPN was set up on what I guess was the compromised central server that sat in both protected and Internet-facing networks. (It’s the former bit of that sentence that I don’t like, not the latter, which is necessary.) However, kudos to the attackers for encrypting their exfiltrated data.

Nothing has been said about the initial breach into the network, but the entry point is almost certainly that Internet-facing server mentioned in the article. Here’s hoping it’s Windows running asp.net and not patched…

2-factor auth, target, remote access, and segregation

For the next year, we’re going to hear a ton of speculation and details and suggestions and eventually facts on the recent Target data breach. Whee! It is, however, a personal pet peeve when expectations are made higher than they should be. Case in point follows!

So Target was breached, and Brian Krebs posted an article about how the attackers may have (read: probably) piggy-backed into the Target network by using the credentials of a third party vendor who apparently provided project management services (or HVAC services, the actual business relationship details are vague) to Target and thus had the ability to remotely connect to Target’s network. Makes sense!

Sophos’ Naked Security blog jumps in as well with Did the crooks who broke into Target tailgate the cleaners? and A hearty welcome to all Cyberoamers! The combination of these articles triggers a few thoughts.

First, it’s “easy” to require two-factor authentication for individual users. It’s more difficult to require it for an entire vendor. Who at the vendor gets the other auth factors? Do they share them? Is it software-based? There are logistics questions going on here that make this an annoying task, especially when, in many companies, something like this is planned, requested, and completed more than likely without any oversight. This is because it’s easier to just do it, and not involve cost centers like security.

Second, I don’t want alarms on remote connections occurring at 2am. I’m sorry, a firm may not have any business connecting at that time (this is why you time-box accounts or the remote connection portal), but sometimes someone may be burning the midnight oil and I don’t want to spend much time chasing these things down every morning when I check out my SIEM dashboard. Yes, you should log these. No, these aren’t valid alarms that should have, on their own, scrambled the security teams.
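By time-boxing I mean something as simple as only honoring a vendor account during an agreed window and logging (not alarming on) everything else. A rough sketch of that check, with a hypothetical vendor account and hours that I’m assuming purely for illustration:

```python
from datetime import datetime, time

# Hypothetical time-box policy: vendor accounts may only connect during an agreed window.
VENDOR_WINDOWS = {
    "hvac_vendor": (time(7, 0), time(19, 0)),   # assumed 07:00-19:00 local business hours
}

def connection_allowed(account, when=None):
    when = when or datetime.now()
    window = VENDOR_WINDOWS.get(account)
    if window is None:
        return False                            # unknown vendor accounts get nothing
    start, end = window
    return start <= when.time() <= end

print(connection_allowed("hvac_vendor", datetime(2014, 2, 1, 2, 0)))    # 2am -> False
print(connection_allowed("hvac_vendor", datetime(2014, 2, 1, 10, 30)))  # mid-morning -> True
```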

Third, HVAC and/or physical equipment vendors do routinely require some sort of remote access. This isn’t strange or rare, and is probably especially true when your business owns and operates, in full, building facilities in hundreds of locations.

Fourth, it’s probably not uncommon that the same pipes that connect remote facilities vendors to your remote facilities also connect your payment and data communication to your remote facilities. It’s annoying (not impossible, but highly annoying and costly) to get those truly separated. In other words, I think it would be, very strictly speaking, very annoying to truly segregate retail payment in-scope systems and networks from those that are not in-scope for PCI. This is because it’s easier to just do it, and not involve cost centers like security and IT, which then have to solve the above headaches, and I can tell you it won’t affect the retail business revenues in any positive way.

Now, I’ll admit I’m nitpicking here. The major questions still remain, as the articles all ask: Why did this third party have access to not only, apparently, the full internal Target network, but access into every remote facility? (I know, it’s easier to just make normal accounts than to take the time to lock them down or limit their scope with whatever remote access tools you’re using.) Why are the payment systems not segregated? (Despite being annoying, this is *still* a valid question to keep on the table.) Where was the rest of the monitoring such as on POS systems, netflow traffic egress, and so on?

Damn, IT and security cost so much money! 🙂