coming soon: discussions on ips and siem

Coming soon are a series of blog posts from 2 sources that, at least to me, sound like they may answer similar high-level questions despite focusing on disparate technologies. Securosis will be posting about SIEM replacements and Bejtlich will be posting about IDS/IPS. I’m looking forward to views on both, and I think they may delve into similar sentiments.

Bejtlich basically framed his prologue around a tiny article about a cybersecurity pilot: “During an address to the 2011 DISA Customer and Industry Forum in Baltimore, Md., [Deputy Defense Secretary William] Lynn said the sharing of malicious code signatures gathered through intelligence efforts to pilot participants has already stopped ‘hundreds of intrusions.'”

First of all, duh. Second, this isn’t about IPS technology or any technology at all, really. This gets back to what I feel are three *very* important resources in security: people, time, and information sharing. I’d argue if *any* business had this sort of ability, they’d see value as well and we’d all issue a great big, “duh.” Third, the world Lynn is talking about is definitely different from my day-to-day; security intelligence efforts are a foreign concept in all but the biggest private enterprises, but I can fantasize at least! 🙂 [Aside: I’d include ‘organizational buy-in to security’ as another valuable resource that defense organizations have a big interest in; but that concept gets pretty abstract and overly broad. Essentially, if security sees a problem, they don’t get trumped by the business…every single time.]

Bejtlich posed the underlying rhetorical question: “If you can detect it, why can’t you prevent it?” Sounds quaint, eh? And it’s a valid question, though the problem is that in my years of watching an IPS/IDS, they’re far, far too chatty to feel good about outright blocking anything but the most blatantly obvious stuff. That gets better if you put the magic ingredients of people, time, and info sharing into it (as well as visibility and power over the damned signatures!). Out of the box, no IDS/IPS is going to be a fun experience from any perspective that includes operational availability.
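To make “power over the signatures” concrete, here’s a minimal sketch of the kind of tuning I mean, using Snort’s threshold.conf syntax (the gen_id/sig_id values and the IP are made up purely for illustration):

```
# Hypothetical examples; the sig_id values and IP are invented.
# Silence a signature entirely for a known-good internal scanner:
suppress gen_id 1, sig_id 2003, track by_src, ip 10.1.1.54

# Keep another signature but cap it at one alert per source per minute,
# so the console isn't drowned in duplicates:
event_filter gen_id 1, sig_id 1851, type limit, track by_src, count 1, seconds 60
```

Even a couple of lines like that can turn a firehose into something a human can actually watch; the hard part is having the people and time to decide which signatures deserve which treatment.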

At the end of the day, I still feel like so many discussions come back to whether someone is looking for absolute security or incremental improvement, while accepting that our equilibrium will always sit somewhere in the balance between security and insecurity.

I might even entertain the discussion that metrics are actually the *wrong* way to go, since I don’t think there is an answer. And security can’t be nicely modeled without human thought and qualitative statements….

incomplete thought: less integration, more security value

I’ve been mentally writing and rewriting a post about SIEM and IPS and spending time on tuning alarms, but just don’t really have a ton to say that’s new. Then I posted (minutes ago) about how we can’t have nice things…. It got my wheels turning…

One point the author makes is, “[solutions] tend to require a bunch of integration work…” Well, that’s sadly true; every enterprise customer wants something different, some checkbox or some strange integration. The problem is that vendors will often satisfy the need, but then insist on using that as an excuse to include the feature in the base product for everyone. This bloats products, making them difficult and confusing to use. The age-old, “we’ll get customer Y to fund this new idea which we’ll then resell over and over after.”

I also believe it leads to dumber products and large blind spots, especially in security products that lose sight of answering the core security questions: “What actually gives me security value?” “What value does X give me?” It’s hard for a vendor to globally answer those, so it’s nice to let customers actually put in their own work on the tool, rather than automate everything and make it ramen-noodle-bland. Instead, vendors seem to be answering, “What would you like in the tool?” without referencing back to the core questions.

Getting back to SIEM and a concrete example, it’s a frustrating time trying to tune alarms down to a level where I’m not inundated by thousands of “usually nothing” alarms, yet not cutting such large swaths of blindness that a truck could drive through into my network. All while working within the sometimes awful boundaries of the tools at hand. I’m often mentally lamenting not being able to parse the logs myself!
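For instance, here’s a minimal sketch of the kind of quick parsing I wish the tools made easy, assuming the SIEM can at least export alarms to a CSV with a rule-name column (the file name and column names are my assumptions, not any particular product’s format):

```python
#!/usr/bin/env python3
"""Rank SIEM alarm rules by noise, from a hypothetical CSV export
(assumed columns: timestamp,rule_name,src_ip,dst_ip), so tuning
effort goes at the chattiest rules first."""
import csv
from collections import Counter

rule_counts = Counter()
with open("alarm_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        rule_counts[row["rule_name"]] += 1

total = sum(rule_counts.values())
print(f"{total} alarms total; top offenders:")
for rule, count in rule_counts.most_common(10):
    print(f"{count:8d}  ({count / total:6.1%})  {rule}")
```

Ten minutes with output like that tells you which handful of rules generate 90% of the pile, which is exactly the visibility I find myself begging the tools for.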

Spend enough time with a SIEM, and you start to realize it’s not very good from a security perspective except in hindsight (investigation and forensics) and centralized log gathering. Kinda like DLP, it takes hands-on time to get past marketing positioning and actually figure out for yourself what the real value is going to be. There are better detection mechanisms than SIEM alone. (If your SIEM alerts on an event your better detection tools shovel to it, why aren’t you alerting from the first tool? The tuning will be better.) [Assertions like these are why this is incomplete…]

I’m sure there’s marketing in there, and maybe this is a long-term vs short-term marketing problem where you want a tool to sell-sell-sell rather than be a narrow-focus, useful, and long-term successful tool like nmap or Nessus; your tool just *is* useful rather than superficially forcing it.

This might be one of the underlying and subtle problems of a compliance-driven industry, unfortunately. Certainly not a nail in the coffin of compliance, but definitely a problem.

your ceo thinks you don’t let him have nice things

Also via Twitter last night, I saw the article, This is why we can’t have nice things (in government). The article is short, and while it targets the Canadian government, it mixes subjects by bouncing between “enterprise” and “government” technology, which I think are two different beasts.

But the point holds up either way: new consumerland happy creative tech is *not* necessarily easy to apply to enterprise needs.

This brings up the question: which side should give ground here, the enterprise with its rigid needs, bureaucracy, and efficiency/scale, or the creative solutions from smaller creators (I’m struggling to find an appropriate word there)?

My brain wants to side with the enterprise, because the cost of supporting and cleaning up messes from self-imposed inefficient tech is grossly misunderstood outside IT (and accounting). But my gut really wants to side with the creative and (possibly) useful tech that abounds in the world today. You can probably do some really awesome work and get some excellent results when embracing the newest things.

From a security standpoint, it’s not as clear either, once you dive in. If a company of a few hundred people embraces new tech and allows consumer devices and such, does that put them at more risk? Probably. But do they *realize* more security incidents? I’d *guess* not, but largely because this new tech is new to attackers as well! Attackers don’t yet have efficient attacks against it and may not understand it either. I’d say if anything increases, it would be accidental or opportunistic issues, or perhaps blended ones, like when a SaaS provider out on the Internet cloud gets their database popped and divulges accounts whose passwords are the same ones your CEO uses on his Gmail account, which also controls his Android device…

In the end, I consider this a good thought-scenario exercise. People on the bleeding edge of tech will learn things that teams running tech from 5 years ago never will, and vice versa, even.

For the record, this little internal warzone of enterprise vs consumer vs bleeding edge is, in my opinion, a healthy state to be in. Being in security isn’t about being paranoid about authority, but rather being in a state where you question and challenge everything (which roughly aligns with traditional definitions of “hacker”).

Then again, this article may just be a disgruntled developer whose “brilliant” ideas just aren’t being realized by the “dumb” masses… (The author also makes quite a few assumptions here, so it really does read a bit disgruntled, but the points end up being poignant!)

to do something good, you first have to do it bad

I can see why Twitter challenges and even betters blogs: I see far more interesting and new stuff than I normally would with just an RSS reader, as people I barely know retweet links from people I’d never know. This short article flew by this morning: “To write good code, you sometimes have to write bad code”.

I don’t even need to quote anything there, and if I had to make a change, I would remove, “sometimes.” This applies not only to code, and performance, and security, but to life in general. Taking some risks and being wrong is one of those weaknesses I struggle with regularly. Just have to keep saying: doing and being wrong is better than not doing at all. And that’s true pretty much every time I make a plunge. Sure I might get my hand slapped and I might even get egg on my face or skin a knee, but (and I have this up on my board at work): “A calm sea does not make a skilled sailor.”

There are so many little idioms I’ve stuck to me like velcro balls in a Double Dare physical challenge, like how we learn the most when shit hits the fan, growth through adversity, and so on.

For the article, I don’t think you *can* write good (and secure) code without first writing and learning from bad code. The problem is that so many people in [web/mobile] development jobs only have homegrown knowledge and end up learning on the fly with production-level apps. We’re still in the relative infancy of computer programming (or of higher-level languages, which change every 5-8 years like tech fads).

asking attackers for constructive solutions

I read nCircle’s Andrew Storms’ blog post, “Rethinking Black Hat: Building, Rather Than Breaking, Security,” and felt like joining the discussion. Essentially, Andrew is saying:

Think back to the [Black Hat] talks you attended and ask yourself how many of them promoted constructive ideas? I’m glad to know that just about every mobile device platform is broken at some level. It’s no big surprise that there are problems with crypto, networking, every OS and even the smart grid…

But let’s push ourselves to take that extra step forward and think about how we can also fix what’s broke. Wouldn’t it be interesting if future Black Hat briefings also had to include one or more ideas on how to fix the root of the problems being shown?

I’m not sure I agree with this, on a few levels.

First, the big one: playing defense is draining. Playing defense involves policies, processes, politicking, covering all angles, and essentially playing a much longer-term game than an attacker. This is draining and time- and soul-consuming. While I wouldn’t say offense and defense should be divorced with a hard line in the middle, I totally understand when an attacker can point out a weakness but is himself weak at effectively describing how to do proper defense against that same attack. I get it, totally.

Second, the media coverage of problems is a huge driver. It’s true, the regular ol’ media picks up on the sensational moments where XYZ is broken, and that gets eyeballs. However, solution ABC gets next to nothing because, well, it’s boring. Which one is going to have a chance to drive attention, budget, action, and awareness, including outside the hardcore geek circles? I’d argue that if solutions were so interesting, they’d have been baked into many of these products and technologies and developments from the start. Doing things securely is still (and I’d argue always will be) an afterthought, so pointing out insecurity in a sensational way is a state of normalcy, to me.

Third, look out for post-con highs (or lows, in the case of security!). It’s great to come out of a con-type gathering energized with all sorts of great ideas. For hacking cons, it’s easy to come out of them feeling like everything is fucked. I guess I look at that as a sort of healthy state of things. Insecurity isn’t going away. Even the lockpick industry doesn’t try to make unbreakable locks (ok, minus marketing spiels and executive dreams), but instead tries to increase the time-to-pick metrics. Andrew certainly knows this, so this isn’t much of a point for me.

I really do get Andrew’s point, and I would even agree for the most part that it would be nice if attackers also offered constructive information on how to do things better, but I don’t think I’d ever actually call anything out for it or even voice that concern much at all, for fear of upsetting the current equilibrium between offense and defense. Granted, there are counter-points to my points, certainly…I may be playing a bit of a devil’s advocate here. 🙂

As a last point, one I hesitate to bring up but really have to since it’s like a little itch poking at the back of my brain on this topic: I would not want to stifle the exposure of problems under the heavy foot of “be constructive.”

There are 2 scenarios in mind for this:

Situation 1
Employee: “Hey boss, I see a problem with this application here where it doesn’t validate people properly.”
Boss: “That’s nice. It’s now your problem to fix, go to it!”
Employee: *sigh* “…next time I’ll just shut up.”

Situation 2
Employee: “Hey, your application doesn’t validate people properly. I can break it by doing blah.”
Developer: “It’s fine. Prove to me you can do that, and that it’s actually bad.”
Employee: *sigh* “…next time I’ll just dump this to full-disclosure and let you handle your own research.”

In either case, our approach to insecurity or issues can have a huge impact on how researchers (or those who point out problems) may become disincentivized to say anything at all. I agree when a boss wants optimism and solutions, but I disagree when said boss dismisses an issue because the messenger has no solution of his own.

(There’s a sub-point in here somewhere about a non-expert consuming information about how technology X is broken, and then wanting the solutions handed out to them when maybe they’re not the appropriate audience or consumer of such information. Sadly, I don’t know how to articulate that on short notice without offending or being extremely confusing… For instance, I might hear that CDMA is broken, and I might decry that the presenter should give solutions, when I only want that because *I* don’t have the solutions either…)

keep it simple, infosec…

I recently saw the MyInfoSecJob.com site for the first time and glanced at a few articles, particularly the security challenges. Reading the comments (i.e. solutions) for the first challenge pretty succinctly illustrates why infosec is so frustrating for business and IT persons! The range of answers is phenomenal, from simple to complex to flat-out suggesting complex setups with specific hardwired vendors and various other things.

I don’t think the answer to any, “help me secure this,” challenge should be to grab your favorite 600 page IT security book and thump it on the desk like you’re some pimp on Exotic Liability flopping your meat on the table. Keep it simple, and keep it on task with the information presented. Nothing in a network/data diagram really begs for a sermon about file permissions, and OS patching, and extraneous complexity for what is obviously a small shop. If you want to get further down that road, you can’t do so intelligently without more information. You’re just going to lose your audience (or demonstrate your lack of experience when suggesting over-the-top recommendations or flatly inappropriate ones…).

Anyway, based on that security challenge, these would be my simple recommendations:

– Replace the hub with a managed switch, assuming that is the basis of the underlying network connecting the users, the servers, and the router together. That’s the one real question the diagram makes me ask: “Is the hub separate, or is that what the blue ethernet network bar is supposed to be?” You can pick up a SOHO one if you want for $100, or drop a grand or two for an enterprise-level one.

– Drop in a firewall/VPN hardware device behind the router (i.e. between the internal network and the router). Configure this to position the web server into a one-armed DMZ, and set up the necessary firewall rules to allow the access shown on the diagram. Configure the VPN so external people can log into it and get to the fileserver as needed. Get a decent enough one that you can budget for; the features and support will be worth it. As a bonus, make sure VPN users are in their own subnet, even segment off the fileserver to its own, and configure firewall rules as necessary for everyone’s access (a rough sketch of such rules follows below). In the absence of other technologies, at least losing one part to an incident won’t cause the rest to be suspect; at least not by default. At worst, grab an old PC and figure out a tool like Untangle or IPCop…
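To make the segmentation idea concrete, here’s that policy sketched as Linux iptables rules, purely for illustration (every interface name, subnet, and address here is invented; a commercial firewall/VPN appliance would express the same policy in its own interface):

```
# Hypothetical layout: eth0 = router/WAN side, eth1 = LAN (192.168.1.0/24),
# eth2 = one-armed DMZ with the web server (192.168.2.10),
# VPN clients land in 192.168.3.0/24, fileserver at 192.168.1.20.
iptables -P FORWARD DROP    # default deny between all segments

# Allow replies to already-established conversations
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Anyone may reach the web server in the DMZ, but only on 80/443
iptables -A FORWARD -d 192.168.2.10 -p tcp -m multiport --dports 80,443 -j ACCEPT

# VPN users may reach the fileserver, and nothing else internal
iptables -A FORWARD -s 192.168.3.0/24 -d 192.168.1.20 -j ACCEPT

# New DMZ->LAN connections are already dead under the default deny,
# but logging them explicitly is nice for visibility
iptables -A FORWARD -i eth2 -o eth1 -j LOG --log-prefix "DMZ->LAN blocked: "
```

The point isn’t the exact syntax; it’s that losing the web server to an incident shouldn’t automatically make the fileserver suspect.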

This leaves open questions, but they’re questions that require further dialogue with the client.

reasons to not get into infosec, sort of

Reasons to not work in information security? Oh yeah, we have those lists!

(Side note, the article drew my attention not because of this list, but because of the awful, awful title: “6 Reasons Why You Should NOT Work With Information Security.” I read that as reasons why business/people should not cooperate with infosec. Pesky prepositions!)

6 – Working long hours, forever. Truth! Then again, this can be said of almost any professional-type career. But that doesn’t mean we can’t enjoy our jobs and unplug and balance life at the same time. It is also part of our duty to educate our own industry so that we’re not expected to guarantee prevention of security incidents. It really is as unreasonable to think we must toil away until our network is impenetrable as it is to make a developer work until a major application is written entirely and perfectly…

5 – People only remember you when things go wrong. Well, yes, this is unfortunately true. It’s going to be a continued part of our job to stress that prevention is not a guarantee, but we can certainly help the odds. This is also a truism in many professions, though, especially in IT or any utility; you get praised when shit hits the fan and you fix it, not when you do good things and no one is crying. (/hyperbole) In the end, I think most infosec geeks I know understand this, and it doesn’t really bother them.

4 – Study, study and more study. Yes, true, and every infosec geek I know loves this! I’m not sure I’ve met any that have been forced to study up on something they hate…with the exception of having to study up on something they already know just because someone wants proof or metrics or won’t just take your word for it (a whole other discussion there!).

3 – There is a limit to your career growth. I think a certain Lee and Mike would disagree. Still, there is a bit of truth in the article’s assertion that many C-level roads have been blessed ones, or ones that flow through the sales silos. Then again, assuming that a C-level/executive/managerial role is everyone’s goal is a poor assumption to make… (about as bad as assuming what the road to C-level looks like…)

2 – No room for mistakes. Refer back to #5: this is true, but it is part of the education process to make sure people know that prevention is not a guarantee. I know, I know, so many people in business dislike even a single mistake, but try to bring this back around to accounting, finance, and strategic management: there is always a balance of risk, cost, and revenue; even if the rank and file think every mistake needs to result in an upheaval of processes and people (really, that’s a result of too much middle management and unhealthy performance pressure…).

1 – People expect you to crack their exes’ Gmail passwords, wireless networks, and combination locks. Again, true, and this is bad why? Also, again, this is part of being in a professional career. Personally, I do enjoy the “mystique” that still hangs over technical mastery and hacking, and feel complimented when someone asks things like this of me; I either oblige when ethical, or educate when unreasonable.

a look into the world of web tracking

Privacy is as big an issue today as it’s ever been in the online world, with Google and Google+ (the behemoth of data mining), real-name usage online, and even bills in the US government to require ISPs to track web usage. If you’ve ever wondered how web sites track you online, even when you think you’re being private, check out the Wired article, “Researchers Expose Cunning Online Tracking Service That Can’t Be Dodged.” Especially check out the embedded link to KISSmetrics’ details. This sort of detail is sobering and annoying at the same time.

On a side note, I dislike seeing discoveries like this made by academic researchers, in a way. I really wish more corporate security staffs would uncover things like this, or even leisure/hobby groups. It sort of suggests that corporate security staffs just don’t have the time to do much more than get by, while academic researchers (who are often on the fringe of practical reality) have plenty of free time to…well…research. And that sucks, since you’d certainly see requests for the KISSmetrics scripts i.js and t.js come through regularly, and you could likely even stop them at your corporate “borders” with zero issues.
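As a sketch of how simple that detection could be, assuming you have a squid-style proxy access log where the URL is the seventh whitespace-separated field (the log path and format are assumptions about your environment):

```python
#!/usr/bin/env python3
"""Flag proxy log lines that fetched KISSmetrics tracking scripts.
Assumes squid's native access.log format, where the URL is field 7;
adjust the field index and path for whatever proxy you actually run."""
import sys

log_path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/squid/access.log"

with open(log_path) as log:
    for line in log:
        fields = line.split()
        if len(fields) < 7:
            continue
        url = fields[6].lower()
        if "kissmetrics" in url or url.endswith(("/i.js", "/t.js")):
            print(line.rstrip())
```

A cron job and a handful of lines like that is about all the “research” this particular discovery would have required.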

Maybe staffs do find these things, but we certainly don’t have a place to air such discoveries and likely the rest of us have way too many other things to think about or look at on any given week…

I guess that’s part of what makes security exciting. 🙂

reason #4 why infosec and journalism don’t usually mix

What gets more eyeballs to a popularity-driven “normal” news site? Is it talking about positive leaders and ideas in digital security, or is it piling onto infosec train wrecks like Greg Evans? It’s one thing to out charlatans in our industry and make sure their profile and the truth get out (many examples of this, including podcast spots), but it’s entirely another to post useless articles with no value just because the guy’s name gets you 12 retweet mentions.

It’s this sort of train wreck journalism that turns me off to most mainstream services (and masses of people in general), where it’s more popular to talk about useless drama than it is positive leaders and examples of things that go right.

big brother watches, and even talks to you

The pressure on companies like Google to comply and assist law enforcement and government “investigations” must certainly be immense. Think about this article on Krebs’ site, “Google: Your Computer Appears to Be Infected,” particularly this:

…Google is placing a prominent notification at the top of victims’ Google search results; it includes links to resources to help remove the infection.

I wonder when the first subpoena will be issued via a Google- or FaceBook-delivered popup when that user is logged in?

Scary or helpful?

incomplete: 2011: the year of the return of hacktivism?

(This is unfinished, and I’m not sure I ever will finish it. It’s simply a set of for-discussion points. While I take the tone that this year’s hacktivism surge isn’t much of a “real” surge at all, it’s still hard to argue against the fact that we’ve had a larger-than-usual number of high-profile and large targets being successfully attacked in a relatively short amount of time.)

Has this year heralded a new trend in rising hacktivism (i.e. the actions of Anonymous, LulzSec, WikiLeaks informers, and various offshoots surrounding those groups)? Maybe. Let’s look at a few things that we pretty much know for sure. I’ll lump most of this year’s activities under the term “hack attacks.” (I’m not including things like RSA, which are almost certainly of a more nation-state-like nature.)

1. Code hasn’t suddenly gotten more insecure. It’s not like we had a period of time where code was secure and the ball has since been dropped. I’d argue that all of these issues have always been present. Granted, the landscape is changing, and web security hasn’t kept pace at all, on both the server side (code, OS, practices) and the client side (browser), but it’s not like things have gotten worse; they’re just not getting any better.

2. These hack attacks are not demonstrating some newfound body of knowledge that attackers have gained. In fact, most (if not all) of these attacks are relatively simple and not new at all. These attacks aren’t dropping 0days; they’re poking at very poorly secured web sites.

Wait, this sort of sounds like I’m about to say nothing has changed. Alas, clearly something has changed…

3. Breach disclosure laws. Sure, I still believe the whole “breach disclosure” issue is like the proverbial iceberg: you get a certain number of visible, announced breaches, but I imagine there is a much larger mass of them hidden under the surface either not detected or simply buried in corporate bureaucracy and dishonesty. Still, you have to admit there are far more announcements today than 10 years ago that are prompted by law. I still don’t believe this means there are more breaches; we just hear about more of them.

4. The media is ready for geek news (and the never-ending ‘reality show’ drama of security that has been thwarted, or conversely someone who has failed). Ten years ago, the media couldn’t give two rips about digital security and the breaches suffered by it. Today, it seems like the media is more comfortable being a bit more technology-focused. And hacktivists seem quite happy to feed this new trend in media coverage, while at the same time feeding off the attention. (Incidentally, I’d say many “older” hackers don’t give a rip about attention, at least not from the secular world. One might consider more mainstream attention desires to be somewhat immature.)

5. More financial transactions are done online. I don’t have numbers, but I’d expect there are more mainstream consumers and many more businesses today who perform financial transactions online than there were even 5 years ago. This means more opportunity for attackers to usurp the process (banking trojans) as well as much more data stored in databases behind public and poorly secured web sites.

My opinion really comes from the above. I don’t think there is a huge difference this year from the past 10 years, except in media coverage and the risk footprint of more financial information online. I think there have always been hacker groups and hacktivist activities; we just normally didn’t hear too much about them in the past.

Here are some things that may be red herrings in this discussion:

– The rise of social media is not significant for the geek crowd. Sure, there may be new faces in the newest generation who are growing up with “social networking,” but for the actual hacker/security geek, the social network has been around for decades, from BBSs, to IRC, to web forums. The problem with social networks is their desire to data mine and their lack of trustworthiness, which will erode any malicious hacker’s efforts to remain hidden. I might entertain an argument that current “hacker” entrepreneurs are encouraging, by their success, younger coding-friendly geeks to be more bold, since it clearly paid off for them, but that’s more of a socio-cultural thing…

I might also entertain the idea that more mainstream people are online due to social networks, which then acts as their gateway to more “hacker-appropriate” networking online. I’d maybe even argue that much of today’s hacktivism is done by these newer members wielding non-novel attacks. This sort of rise and fall in the pursuit of notoriety has gone on for decades in the hacker underground. It usually wanes as these “newbies” grow up and actually start having bills they need to pay (and either turn into for-profit criminals or move on to real jobs) or have a law enforcement scare.

– Anything to do with the criminal world, or even “APT,” i.e. nation-state-sponsored digital espionage. There is little argument that recent coverage of these hacktivist attacks and plunders has driven at least some interest in security, and if nothing else is exposing the risk and/or insecure state of things. No criminal or espionage threat agent will like that attention. It hurts their chances of success, of being undetected, and increases the chance of “make an example of” penalties if caught. The for-profit crime and APT trends have been around for several years now, and are not new this year.

– I hate to go here, but I’d almost certainly leave discussions of PCI and compliance at the door. I believe compliance (read this as “PCI” even if I just generalize the term) does improve certain things, but I also believe it erodes other things. This is a discussion tangent big enough for its own treatment, but I think the net gain compliance has on a discussion about this year’s hacktivism is zero.

the bad news of no unemployment

Via Twitter from GovInfoSecurity came a link to an article titled, “The Bad News of No Unemployment.” This certainly is a problem that I’ve seen personally, where someone in a “security” position (whether it be contract, consulting, advisory, or full employee) really doesn’t know what the fuck they’re talking about. Either they can’t walk the walk (whether that be security testing or walking in the shoes of ops/devs) or they absolutely fail the talk (when their advice sucks and they clearly don’t have much real knowledge beyond a few boilerplate topic responses).

Do the industry some favors. Hire only people who have real talent; filling a position with an assclown is a disservice to your business and the industry. Expose those who do not. Don’t support those who are doing us all a disservice, or do help them to get out of that doghouse by imparting real wisdom, advice, and assistance. I really believe this also includes informal, non-paid assistance to non-sec managers who just need 30 minutes of lunch talk to get a better idea on how to evaluate security vendors/candidates/services.

work in a way that matches your personal values

I am by no means well-read on books about business and leadership and entrepreneurship and biographies of successful people. But I do quietly collect casual information from news articles and other sources online. To me, one of the “work” rules in recent decades has been to essentially be yourself. Wear what you want to wear (within respectful, credible reason), work in a way that aligns with your personal values, be human; essentially, don’t have a work persona separate from your home persona. In short: work in a way that matches your personal values, your character.

I’d agree with this. I think that sort of feeling aligns with why I really respect people who “geek out” about their chosen field, even if they come across as strange or even obsessive. Give me a security geek who lives and breathes the industry during work and play over a 9-to-5-only sort of mentality. I understand work-life balance issues, but I also understand happiness and enthusiasm as well.

the futility of chasing cheating

Saw a link pass through Twitter to a blog post, “Why I will never pursue cheating again”. This is a quick read that hits the following points:

  • catching policy violators (the human problem)
  • “us against them” environment
  • reflection on customer evaluations (managerial conditioning)
  • rechanneling activities and interaction

Unless your job is specifically about finding corporate security policy violators, no one ever truly does it until such a violation has a tangible negative effect on the business (or *not* reporting it has consequences, like a mantrap that locks up if two people go at once). And it doesn’t take a genius to see how this makes digital security difficult, regardless of whether you believe in tackling the human or the technological problems in security.

simple isn’t simple

(That’s the best title for this; after spending a few minutes staring slack-jawed for a better idea, I figured I’d just steal the title.) Rich Mogull wrote a DarkReading piece, “Simple Isn’t Simple,” (and companion Securosis mention) that I think everyone should read. This part stood out to me:

We security pundits, researchers, and vendors tend to forget how hard real-world operational IT is. …I am saying we need to recognize that it’s hard at all levels. That even the easy parts are nearly universally difficult in practice.

This is one reason I often sarcastically wonder aloud whether some people who talk security only do security in their one-room office with 2 computers, an all-in-one fax/printer, and an ActionTec DSL/wireless router. Likewise, I appreciate anyone who understands which “easy” best practices are, in *real* practice, really difficult for various reasons. (Dare I say that a little empathy goes a *long* way to helping embattled security managers/analysts? And please don’t tell our execs what we should be doing in a tone that makes it sound easy!)

This is also one place where it probably sucks to be a QSA. You have one-liner PCI requirements and practices on one side, and a real world operation on the other. And then you tell them they need to rearchitect their network/systems…at what cost again? And it needs to be done before the next audit? Does the QSA know all the little things operations does to hide or cover issues when backs are against a wall? Maybe we should talk incremental improvements? (If Josh Corman needs more ammunition in his PCI debates, talk about going from no security to full security in one audit cycle and how healthy that will turn out to be! Hey, it might work…)

The only real easy part is when you get to buy a piece of technology, slap it in, and it just works; say, a card-activated magnetic door lock. But so much in security requires process (maintaining access, following up on logs, checking out anomalies, verifying proper working order, maintenance, safety…), which isn’t going to be very easy any time soon and is excruciatingly hard to measure.