rogue wireless device scanning and pci

Need to comply with PCI? Whether you have wireless devices or not, you do need to scan and make sure you don’t have any popping up. This SPSP report goes into detail on this subject.

My biggest concern was the mention that using Netstumbler or Kismet to discover rogue access points is sufficient. I agree, but only if you’re constantly analyzing the results, i.e. not just doing a walk-through every quarter, month, or week, but rather having a dedicated system always looking. Not some point-in-time crap.

Why? Because an idle, SSID-hiding AP will still be invisible to Netstumbler and Kismet (even a chatty SSID-hiding AP will hide from Netstumbler!). You need to be capturing during even the small window when a wireless AP is talking.
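To illustrate what I mean by “always looking,” here’s a rough sketch, assuming scapy, a card already in monitor mode on a hypothetical wlan0mon interface, and a made-up allow-list of sanctioned BSSIDs. It just passively watches beacon and probe-response frames and flags anything it doesn’t recognize:

```python
# A minimal sketch: continuous, passive capture of 802.11 management frames.
# Assumptions: scapy is installed, the card is already in monitor mode as
# "wlan0mon", and KNOWN_BSSIDS is a made-up allow-list of sanctioned APs.
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11ProbeResp, Dot11Elt

KNOWN_BSSIDS = {"00:11:22:33:44:55"}  # hypothetical sanctioned APs

def check_frame(pkt):
    if pkt.haslayer(Dot11Beacon) or pkt.haslayer(Dot11ProbeResp):
        bssid = pkt[Dot11].addr3
        if bssid and bssid.lower() not in KNOWN_BSSIDS:
            # an SSID-hiding AP shows up here with an empty SSID element
            ssid = ""
            if pkt.haslayer(Dot11Elt):
                ssid = pkt[Dot11Elt].info.decode(errors="replace")
            print(f"possible rogue AP: bssid={bssid} ssid={ssid or '<hidden>'}")

# runs until interrupted -- the point is continuous capture, not a quarterly walk-through
sniff(iface="wlan0mon", prn=check_frame, store=False)
```

A dedicated Kismet box does this job far better, of course; the point is simply that whatever is listening needs to be listening all the time.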

By the way, I’m hoping some answers to EthicalHacker.net’s latest challenge will not only answer the second question (how were the kids able to access Greg’s rogue access point even though it was not detected during Mr. Phillips’ PCI compliance assessment?), but also explain how to detect a rogue wireless device that isn’t talking at the moment. I’m not sure that’s possible short of brute-forcing an SSID response or trying to get the AP to talk from the wired side to the wireless side somehow…

as if the state of pci wasn’t confusing enough

As if the state of PCI wasn’t confusing enough, here is a piece from ComputerWorld that basically makes my head explode:

A Gartner Inc. analyst is urging companies that do business with Heartland Payment Systems Inc. and RBS WorldPay Inc. not to switch to other payment processors just because of Visa Inc.’s decision this month to remove Heartland and RBS WorldPay from its list of service providers that are compliant with the PCI data security rules.

and later this:

Visa requires all entities that accept credit and debit cards issued under its name to work only with service providers that comply with the PCI rules, which are formally known as the Payment Card Industry Data Security Standard (PCI DSS).

But in a research bulletin issued yesterday (download PDF), Gartner analyst Avivah Litan said that customers can continue to utilize Heartland and RBS WorldPay without facing any fines from Visa.

My first reaction is, “So why the hell does PCI (or the PCI certified listing) matter?” Yes, I understand companies and people make mistakes, and honestly this may not be a reason to jump ship from an entity, but it certainly calls the relevance of PCI listings into question.

Well, we’ll make an exception to our own rules saying you need to work only with service providers that are certified?

They’re going to be recertified so stick with it for a bit? Are you sure? And what if they lapse at “a point in time” again?

PCI was not at fault because while HPS was certified at a point in time, it did not maintain that certification at every point in time? (Wow, that could be the infinitely defensible weasel-out card!)

By the way, their delisting is just a point-in-time thing, so just wait?

So, we have this PCI certified listing that PCI itself wants you to adhere to, but if someone drops off, don’t worry about it because they’ll recover. Is there *any* reason left to worry about someone not appearing on that list or being delisted? Which is worse?

And I like the irony (?) of another recommendation in the same Gartner report:

All parties that handle cardholder data: Focus on maintaining continuous cardholder data security, rather than on achieving PCI-compliant status.

No shit? But isn’t that “do it yourself all the time” attitude what kept us in a mediocre state in the first place?! It obviously does not work broadly, which is why we needed a kick in the junk from something with steel toes. But do we really need limp steel toes too?

terminal23.net to open cloud computing services to public

FOR IMMEDIATE RELEASE

Terminal23.net is proud to announce its offering of cloud computing services to the general public. Terminal23.net will immediately begin offering blog, news, and commenting services to all customers through its stable and scalable cloud computing architecture. The more you click around as a visitor, the more our system recognizes this and provisions computing resources to serve your news needs. In addition, customers do not have to worry about the complexity of the underlying technology!
Terminal23.net is also proud to align itself with the Open Cloud Manifesto principles!

  • We are dedicated to working with other cloud computing providers to address the challenges of adoption our service may have, and to support ongoing standards. We have started by using common blog software, and a common layout of post title, body, date, and even comment services!
  • At no time will we lock our customers into using only our service. Feel free to read other blogs, too!
  • We will work diligently to align ourselves with existing standards wherever possible.
  • We will also ensure that the need for new standards is met through collaboration rather than individual standard provisioning.
  • We will be committed to working with the community, not to further our own technical needs, but rather in response to customer needs.
  • We will… hell, these all sound the same anyway, so we’ll just say we meet the last principle too!

Terminal23.net is excited about the future and about offering this new service to the public. This is a new chapter for our organization!

detecting conficker infections over the network

Dan Kaminsky released some information this morning that it is possible to remotely (and anonymously) detect if Conficker has owned a system. He does link to a POC scanner (python). This is the result of some work by Tillmann Werner and Felix Leder of the Honeynet Project. Looking forward to the paper!

Update: Here is more information about Conficker compiled by the handlers at the SANS diary. I haven’t personally paid much attention to Conficker recently, mostly because we appear to be fully patched on known, managed systems where I work, so it has been a non-issue since Microsoft released the patch (MS08-067). That, and it was pretty obvious the issue at hand was wormable and would be important.

on bad idea zombies and much more

I’m obviously catching up on some blogs on a rather nicely lazy Friday. Over at Tenable, they have a repost of Marcus Ranum’s recent keynote at SOURCE Boston, Anatomy of The Security Disaster. This is a long read, but exceedingly well worth it. I apologize for not looking too hard for a posted video.

So, what’s going on? We’ve finally managed to get security on the road-map for many major organizations, thanks to initiatives like PCI and some of the government IT audit standards. But is that true? Was it PCI that got security its current place at the table, or was it Heartland Data, ChoicePoint, TJX, and the Social Security Administration? This is a serious, and important, question because the answer tells us a lot about whether or not the effort is ultimately going to be successful. If we are fixing things only in response to failure, we can look forward to an unending litany of failures, whereas if we are improving things in advance of problems, we are building an infrastructure that is designed to last beyond our immediate needs.

gunnar’s he got game rule

A quick pointer to an excellent article by Gunnar Peterson talking about his “He Got Game” rule. In short, you gotta have game with coding if you want to tackle securing code. This runs parallel to my thinking that you have to know how to code before you can know how to secure your code. Adrian Lane adds an excellent comment as well, at least from what I pulled from it (something about its wording made me need to read it 5 times…)

I’ll state there are always exceptions, but I’d say those exceptions are far from the norm. At least you can say that if someone is technical in one area, they *could* have a small head start in tackling another technical area. In the end, just like having a security mindset is a huge help for a security professional, having an aptitude for and experience in coding is a huge help for a dev security pro.

I could simply be failing by generalizing way too much. 🙂

The difference in all of this to me is: TRAINING/PRACTICE. Whether it is self-prescribed or work-prescribed, training makes a difference.

As far as his book recommendation, I have no idea about it, but I’d be willing to give it a flip-thru to see if I could grasp it and benefit from it.

* The older I get and the farther away I get from the analog world, the more I wonder how the hell we used to write and add emphasis without markup tags or non-standard type (**, bold, italics, all-caps…). Then again, without computers, thinking about what job I would be working now leaves me blank too…

linuxhaxor.net: 10 twitter clients

LinuxHaxor.net has posted 10 Twitter clients for Linux. I’ve not used any of them; in fact, I’ve not used any Twitter clients so far. I Twitter from work (web) or through my phone. But I know that I’ll only get the most use out of Twitter if I can be less disjointed in my following and participation, and see tweets as they get posted by the people I follow. I’d also like a nice way to scroll back over the last x hours I’ve missed (props to recent [this week] interface changes that improve this!). That will all require a Twitter client. So, someday sooner than later I’ll be trying these out and wanted to file away the link.

did you know how easy it was to hijack twitter via sms?

I missed this bit of news that Twitter accounts with SMS texting turned on may have been hijackable for quite some time, provided you knew the mobile number someone had activated for posting Twitter messages and you were coming from an international location (I’m beginning to think Krebs is one of the only truly successful security journalists around!). Read the article for the details.

More disturbing is the tone of dismissal and lack of creative thinking from Twitter in regards to this issue. Sure they had a fix, but they certainly didn’t grasp the full issue.

In essence, we’re rolling new tech (and ways tech can interact with other tech) out faster than we can properly manage it. Then again…that’s nothing new, now, is it?

the danger of abstracting too far from the basics

I’ve been doing a little reading today, since it feels like Friday around here, and came across an article about space storms possibly creating disaster situations over large swaths of the US. This is due to our heavy reliance on the power grid for, well, pretty much everything.

The second problem is the grid’s interdependence with the systems that support our lives: water and sewage treatment, supermarket delivery infrastructures, power station controls, financial markets and many others all rely on electricity… “It’s just the opposite of how we usually think of natural disasters,” says John Kappenman… “Usually the less developed regions of the world are most vulnerable, not the highly sophisticated technological regions.”

Taking this down a bit to the IT infrastructure level, this reminds me how dependent we can become on our own infrastructure to do common or even uncommon tasks. Web interfaces will be down in a power outage or misconfiguration. Do you know how to expediently console into your devices? Can you work on a command line? Do you have documentation on how your scripts operate so you could do the work manually in an emergency? Could you interpret tcpdump output if your network is being crippled by a worm, preventing IDS use?

Some of this comes down to something I believe in: the simple fundamentals. Tools are great for making us more efficient, but at the end of the day good IT people are not defined by their GUIs. They are defined much like good ol’ Unix tools: by how well they can use the simplest building blocks to get their tasks done, and how creatively they can chain those simple tools together to do fabulous things.

This also goes for security. We are not defined by the automated tools we use (those who are are script kiddies), but rather by whether we understand how those tools work and could emulate similar behavior using the basics if need be.
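As a trivial example of what I mean, here’s a bare-bones sketch (assuming scapy and a hypothetical capture file named worm.pcap) that does by hand what a fancier console would do for you: tally TCP SYNs per source to spot the host that’s scanning the network.

```python
# A bare-bones sketch, assuming scapy and a hypothetical capture file worm.pcap:
# count TCP SYNs per source address to spot a scanning host, the kind of thing
# you would otherwise eyeball in raw tcpdump output.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

syn_counts = Counter()
for pkt in rdpcap("worm.pcap"):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and (pkt[TCP].flags & 0x02):  # SYN bit set
        syn_counts[pkt[IP].src] += 1

# the noisiest sources bubble to the top
for src, count in syn_counts.most_common(10):
    print(f"{src}\t{count} SYNs")
```

Nothing an IDS wouldn’t tell you, but when the IDS is the thing that’s down, the simple building blocks still get you there.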

Further, we can expand this into our virtual infrastructure. If the host goes down, or hell, even just your virtual center client box, are you dead in the water? Would you be able to stand up a (*shiver!*) physical web server quickly and get critical apps working while the host is being operated on?

Finally, this does echo an aspect of one of the simple security maxims that I believe was quoted or made popular by Schneier or Geer: “Complex systems fail complexly.”

on embracing failure

I’ve been getting behind on too many blogs these days, but this morning I was catching up with posts on the Security Catalyst site and have been impressed with the myriad contributors posting useful and digestible articles. Nice!

One in particular by Adam Dodge reinforces something I’ve been trying to learn these last few years (and is also referenced in the A Hacker Looks at 50 presentation). In essence: don’t be afraid to fail; don’t be afraid to be wrong; don’t be afraid to be ‘not perfect.’

I’ve seen this in many ways, in books for tech geeks, posts on blogs, and even leadership/CEO books. I’ve even experienced it because, let’s face it, we learn the most when we fail (or for us geeks, we learn the most when we’re troubleshooting). Waiting for perfection is inaction. We even learn this in relationships, the power of admitting to being wrong.

But damn is that paradigm hard to learn when we’re implicitly taught, from childhood through adulthood in the workplace, that you have to be right and it is bad to be wrong. Even with topics I know very little about, I feel this urge to present myself as knowledgeable (such as nodding along with the service mechanic explaining what is wrong with my car!).

So it’s been a sort of quiet goal of mine to be wrong a bit more often, and ask more questions, even seemingly simple ones, just to allow me to understand things better. And rather than sit inactive waiting for knowledge on a topic like implementing a new system/tool, just do it and be ready to be wrong.

Kinda like being ready for the inevitable security incident, eh?

I could even bring this around back to gaming. In order to be a good player, you have to take those small steps where you bumble around a map, try to learn the buttons, and figure out tactics. You’ll take those 0-20 lumps. Or in an MMO you can’t just wait around to raid only when you have full knowledge, but you have to get in there and make your mistakes those first few times. It is strange that these simple concepts become demons in a workplace.

jeremiah on application security spending

Jeremiah Grossman dives into the question, Why isn’t more money being spent on Application security when it is obviously important today?

During an event a panel of Gartner Analysts asked the audience what the best way is for an organization to invest $1 million in an effort to reduce risk. The choices were Network, Host, or Application security… The audience selected Application security. However, the Gartner CSO (who took the role of CIO in the play) overruled the audience’s decision. They instead selected Network security, while at the same time curiously agreeing that Application security would have been the better path. His rationale was that it is easier for him to show results to his CEO if he invests in the Network.

He has a point!

I also believe it has to do with visibility and knowledge. We’ve had networking and systems around for quite some time, and we’re getting better at operationally baking in and showing security. I don’t think we’re nearly as mature with application security. Unless someone codes, they really just don’t get it because it is hard to visualize and measure.

There is also an experience or knowledge gap where, again, unless you’re a developer, you really can’t effectively explain or demonstrate security or how to code securely. I’ve seen “senior” developers who give zero thought to security beyond the most basic level (i.e. “sure, we have admin and normal user types in the system…”).

The rest of Jeremiah’s article is also excellent reading. I love his point about the immediacy of results. That’s a frustrating business mindset for technical problem solvers.

Maybe that gets into the realm where the business needs to start working with IT, as opposed to *only* saying IT needs to align with business.

set ourselves up to blame others

Clouds. Ugh. I’m still trying to slowly make sense of what the cloud is, but it doesn’t help that pretty much everything is being rebranded as ‘cloud.’ Once upon a time I thought cloud computing was sort of like off-loading massive computing needs to someone else (a lot like SETI only more commercial, or maybe more like purchasing botnet time?), but now I think ‘cloud’ refers to anything you use that isn’t in your pocket or on your desk. So does this mean Web 2.0 is officially passé and ‘Cloud’ is the new Web 3.0?

Nonetheless, some thoughts which likely illustrate why I’m not getting it…

– If an enterprise isn’t doing their IT infrastructure correctly already, they alone can’t evaluate which cloud vendors *are* doing it correctly.

– Cloud vendors aren’t doing anything magical that makes them far better than your own infrastructure.

– And if the ‘cloud’ fucks up, you can just blame them, right?

– At least you can see into your own operations. You can’t see the cloud ops. And at least your operations can care about your business.

– Cloud companies want to make money too. Which means rather than paying contractors to make your solutions, you’re paying another enterprise to create your solutions. So, what are you really buying by probably spending more? (answer: experience and blame shift, and experience is often what enterprises are avoiding paying for in their own staff.)

– Cloud, in my view, yields value in: 1) experience through repeating solutions, 2) internal scalability through repeating solutions, 3) and internal efficiency through repeating solutions. If you can provide solution A for company Y, you should limit costs by basically providing solution A` to company C, right?

– Cloud is basically a new brand for the software market, the web market, or an IT data-churning service (B2B service?). Absolutely nothing new, so pick your poison.

– While basic computing needs for enterprises are very similar, it only takes a few weeks of work to make their environments terribly dissimilar. This digs at the value any repeat solutions will have for different businesses. Something the service industry has to deal with by stacking experience, rather than pre-packaged products. Any developer creating solutions for multiple businesses could attest to this, I’m sure.

– And if cloud is a service, then it will always be pressured to squeeze 10 clients into the space where 6 quality-driven clients would exist. (*wave to Jerry Maguire*)

wisdom from a hacker looking at 50

I missed G. Mark Hardy’s talk at Defcon titled “A Hacker Looks at 50,” but I had earmarked it to check out later. I’m glad I did, since he has a lot of great wisdom to share. I wanted to yoink his main slide bullet points just to reinforce them to myself. His talk is available online (mp4). Here are G Mark’s Observations on Life:

  • Just ask.
  • Don’t wait for perfection.
  • Become a master.
  • Vision is everything.
  • Never disqualify yourself.
  • Challenge your limitations.
  • Have a vision. Write it down.
  • Speak every chance you get.
  • Don’t go it alone.
  • Be flexible.
  • Aim high.
  • Be PASSIONATE.
  • Beware of bright shiny objects.
  • Choose tech or management.
  • Do something bigger than yourself.
  • Recipe for life:
    • vision
    • plan (take control back, take a break in the woods)
    • take risk (you can always go back)
    • stay focused (TTL)
    • determination (how badly do you want it?)
  • Don’t save your best for last.
  • Be generous now. (Our stuff doesn’t follow us.)
  • Enjoy life.