zeltser’s tips on detecting web compromise

Lenny Zeltser goes over 8 tips for detecting website compromise. There are far too few security writers with enough technical chops to blend in with a pack of geeks at a security conference, but Lenny is one of the better ones, so any criticism I have for his writing is done very lovingly. Snuggly-like, even. I figured I could mention this nice article and give my own reactions.

Lenny starts out with 3 overly broad best practices. Sort of like saying, “If you want to get into shape, you should run more,” which *sounds* easy. I think he treats these items like I do: you’d be remiss to leave them out, but you can’t do them justice without getting far more specific than a list like this allows.

1. Deploy a host-level intrusion detection and/or a file integrity monitoring utility on the servers. – The biggest problem with this (collective) item is how much activity it is going to generate for an analyst. And if you do tune it down to a quiet level, I’d argue that you’re not going to see what you want. But, as said already, it’s a necessary evil in a security posture, whether you like it or not. At a bare minimum, you should know every time a file is changed or a new file appears in your web root (with exceptions for bad apps that need to write temp files and other crap to themselves, the bane of web root FIM…).
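To make that bare-minimum case concrete, here’s a minimal sketch of web-root file integrity monitoring: snapshot file hashes once, then diff later snapshots against the baseline. The paths are placeholder assumptions, not a real deployment.

```python
# Minimal web-root FIM sketch: baseline file hashes, then diff on each run.
# WEB_ROOT and BASELINE are assumed paths; adjust for your own server.
import hashlib
import json
import os

WEB_ROOT = "/var/www/html"
BASELINE = "/var/lib/fim/baseline.json"

def snapshot(root):
    """Map each file path under root to its SHA-256 digest."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(old, new):
    """Return (added, removed, changed) file sets between two snapshots."""
    added = set(new) - set(old)
    removed = set(old) - set(new)
    changed = {p for p in set(old) & set(new) if old[p] != new[p]}
    return added, removed, changed
```

Schedule the diff from cron, alert on anything unexpected, and carve out exceptions for those apps that insist on writing temp files to the web root.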

2. Pay attention to network traffic anomalies in activities originating from the Internet as well as in Internet-bound connections. – The “Internet-bound connections” part really needs to be implemented as soon as possible, before an organization has so much going on that you can never close it down without preparing everyone for the inevitable breaking of things no one knew were needed. Watching traffic coming in? Not so easy; you’ll probably just end up looking for stupidly large amounts of traffic (which may be normal if you service a large client sitting behind 2 proxy IPs) or the most obvious Slapper/Slammer IDS alerts. But you absolutely want to know that you just had 500 MB of traffic exfiltrate from your web/database server to some destination in Ukraine, and that it contained a tripwire entry from an otherwise unused database table. I would drop the buzzword “app firewall” in this item as well. You should also know what is normal coming out of your web farm (DMZ), and anything hitting strange deny rules on the network should be investigated.
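As a toy illustration of the egress side, the sketch below flags internal hosts pushing an outsized volume to a single external destination. The flow-record shape, the internal prefix, and the threshold are all assumptions for illustration, not the output of any real flow collector.

```python
# Toy egress check over flow records: flag any internal host that pushed
# an unusual volume to one external destination (e.g. the 500 MB scenario).
from collections import defaultdict

THRESHOLD_BYTES = 500 * 1024 * 1024  # assumed alert threshold: 500 MB

def egress_alerts(flows, internal_prefix="10.", threshold=THRESHOLD_BYTES):
    """flows: iterable of (src_ip, dst_ip, byte_count) tuples.
    Returns [(src, dst, total_bytes)] for suspicious outbound pairs."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        # Only count traffic leaving the internal range for the outside.
        if src.startswith(internal_prefix) and not dst.startswith(internal_prefix):
            totals[(src, dst)] += nbytes
    return [(s, d, b) for (s, d), b in totals.items() if b >= threshold]
```

In practice you’d baseline per-host norms rather than use one flat threshold, but even this crude version would have caught the gut-punch scenario above.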

3. Centrally collect and examine security logs from systems, network devices and applications. – Collect: yes. Examine: yes, with managed expectations. I really want to say yes wholeheartedly, but having done some of this, 99% of the examined stuff that bubbles up to the top is still not valuable. There’s a reason I think gut feelings from admins catch lots of incidents and strangeness on a system/network, and it’s not because they show up clearly in logs. Especially with application logs, if you want them to be trim and tidy, you’re looking at a custom solution, which means custom manhours, overhead, and future support resources.

Side note on custom app logs: if you can, log/alert any time a security mechanism routine gets used, for instance when someone attempts to put a ‘ into a search field (the kind of thing a WAF also watches for).
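A hypothetical sketch of that side note: when input validation trips on something like a stray quote, log it as a security event rather than silently scrubbing it. All names and patterns here are made up for illustration.

```python
# Sketch: treat a tripped validation routine as a loggable security event.
# The logger name and the pattern list are assumptions, not a real WAF.
import logging
import re

seclog = logging.getLogger("appsec")

# Crude indicators of injection attempts in a search field.
SUSPICIOUS = re.compile(r"['\"<>;]|--|\bunion\b", re.IGNORECASE)

def check_search_term(term, client_ip):
    """Return True if the term looks hostile; log the attempt when it trips."""
    if SUSPICIOUS.search(term):
        seclog.warning("validation tripped: ip=%s term=%r", client_ip, term)
        return True
    return False
```

The point isn’t the pattern matching (a WAF does that better); it’s that the event lands in your central logs where an analyst can see someone probing the app.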

4. Use local tools to scan web server’s contents for risky contents. – You certainly should do this as much as you can. You can scan for even deeper things, like any new file at all (depending on how much activity your web root normally sees) or files created by/owned by processes you don’t expect to be writing files.
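A crude version of such a local content scan might just grep the web root for patterns common in injected webshells and hidden iframes. The pattern list below is illustrative only, nowhere near exhaustive, and the extensions are assumptions.

```python
# Crude local scan of a web root for content patterns common in injected
# webshells/backdoors. Patterns and extensions are illustrative assumptions.
import os
import re

RISKY = re.compile(
    rb"eval\s*\(\s*base64_decode"     # classic obfuscated PHP shell
    rb"|gzinflate\s*\("               # compressed payload decoder
    rb"|<iframe[^>]+display\s*:\s*none",  # hidden iframe injection
    re.IGNORECASE,
)

def scan_webroot(root, exts=(".php", ".html", ".js")):
    """Return paths under root whose contents match a risky pattern."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    if RISKY.search(f.read()):
                        hits.append(path)
    return hits
```

Run it alongside the FIM checks; signatures like these catch known badness, while the file-change monitoring catches the stuff no pattern anticipates.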

5. Pay attention to the web server’s configuration to identify directives that adversely affect the site’s visitors. – This can be a bit easier in Apache or even IIS 7.0+, but some web servers, like IIS 6, are hideous for watching configurations. Still, definitely keep this in mind. Thankfully, on such servers attackers face extra hurdles beyond just writing a file into the web root.

6. Use remote scanners to identify the presence of malicious code on your website. – I agree with the spirit of this, but I think internal scans need to be done as well. If a bad file is present but not discoverable from an existing link on your site or an easily guessed name, then an external scan will totally miss it.

I should take a moment to stress that many of these items include signatures or looking for known badness, but none of that should replace actually looking, at some point, at every page you have for things that are clearly bad to a human, such as a defacement calling card or something.

7. Keep an eye on blacklists of known malicious or compromised hosts in case your website appears there. – It’s hard to say this shouldn’t be done, but it certainly offers little return on the investment of time. If you *do* happen to show up before any other alarms sound, then clearly it is nice.

8. Pay attention to reports submitted to you by your users and visitors. – I’d personally overlook this item 9 times out of 10, but it is really oh so necessary.

If I wanted to add an item or two:

9. Scorched earth. – While not a detection mechanism, you do run certain risks if your web code sits out on the Internet for a long period of time and grows stale. You should refresh from a known good source as often as you think you need to. This can include server configs and the like. (For instance, I roll out web root files every x minutes on my web servers at work, and I regularly wipe out web server configs and rebuild them via automation.) Don’t be that company that had a backdoor gaping open to the world for 2 years, when a simple code refresh would have closed the hole. A diff comparison mechanism may satisfy the ‘detection’ criteria of this list.
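That diff comparison mechanism could be as simple as comparing the deployed web root against the known-good source tree before each re-roll, and alerting on drift. The directory paths here are assumptions for illustration.

```python
# Sketch of a 'scorched earth' drift check: compare the deployed web root
# against a known-good source tree and report anything that differs before
# re-rolling. Paths passed in are whatever your deployment layout uses.
import filecmp
import os

def drift(known_good, deployed):
    """Return (only_in_deployed, differing) lists of relative paths."""
    extra, changed = [], []

    def walk(cmp, prefix=""):
        # Files present only in the deployed tree are prime suspects.
        extra.extend(os.path.join(prefix, n) for n in cmp.right_only)
        # Files whose contents no longer match the known-good copy.
        changed.extend(os.path.join(prefix, n) for n in cmp.diff_files)
        for name, sub in cmp.subdirs.items():
            walk(sub, os.path.join(prefix, name))

    walk(filecmp.dircmp(known_good, deployed))
    return extra, changed
```

Anything in either list is worth a look before the automated refresh quietly paves over it; the backdoor that sat open for 2 years would have shown up in the first run.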

excellent diginotar incident summary over at isc

Swa Frantzen (ISC) has a great discussion of the recent DigiNotar drama. I do take minor exception to this statement:

I for one would love to know who that external auditor was that missed defaced pages on a CA’s portal, that missed at least one issued fraudulent certificate to an entity that’s not a customer, and what other CAs and/or RAs they audit as those would all loose my trust to some varying degree. This is not intended to publicly humiliate the auditor, but much more a matter of getting confidence back into the system. So a compromise that an unnamed auditor working for well known audit company X is now not an auditor anymore due to this incident is maybe a good start.

I totally understand this sentiment, and actually do agree with it. But we do have to be careful that we don’t set every single security auditor/expert up for failure, where one mistake causes the hammer to drop. (Speaking of elephants in rooms, the seeking or assumption of perfection is a ‘subtle’ one…)

Granted, repeatedly missing defaced pages hits the facepalm category. But I think this kind of oversight (from tripwires on attacks to page inventory reviews to edit/ownership times to web app sec checks, etc.) can happen to literally every organization if they’re not rigorous in their testing. It still comes down to knowing what is valuable in the eyes of a threat, and being extra careful around those processes (i.e. issuing a trusted certificate!).

Sitting back and pondering this scenario while nursing some scotch illustrates all sorts of things that are wrong with the security mindset in our world, ya know? Maybe “wrong” is a bad word for it, but rather the challenges we face and will eternally face, as a function of reality.

some general thoughts about blogging

McKeay wrote a great post about blogging this past weekend, and I think any security blogger should check it out. I really like his subpoints about blogging and working and balancing both:

I’ve learned a number of lessons about blogging the hard way. I’ve learned that no matter what I think I’m writing, what’s important is how other people are reading it… I’ve realized that people are reading and judging what I write, for good and for ill. And when I write something people read, it can get back to my employer.

I also like this:

More often than not, my employers have maintained an air of benevolent ignorance towards my blog, but every so often I’ve gotten the “we’ve read your blog and are not happy” conversation. Not often, but it has happened and it’s never comfortable talk. I’ve actually told at least one manager that my blog and podcast are more important to me than my job.

For me certainly, blogging is a personal thing, a way to organize my own thoughts, record something for the future, or vent a little bit. It’s also a way to dive deeper into what would always be a hobby for me, even if not a job. Even if I didn’t have a single reader my blogging habits wouldn’t change a bit.

Anyway, here are some points of my own that I try to follow.

1. Separate work from personal if you need to. This has become a big deal in the past 5 years, as work and play time blend together, largely because something you “say” (digitally) during your personal time can now easily persist for years for people at work to discover. Things you could once say with buddies or at a bar no longer stay with those buddies or at that bar at a single point in time. Therefore, with blogging especially, I try to keep work separate. I don’t hide my identity on here, but likewise I don’t advertise my blog to work colleagues (they can easily find it on their own if they want) and I don’t mention my employer anywhere on here. I also leave deeply personal things aside; the right people reading certain incidents/anecdotes would recognize themselves in them, but I try to make sure those are generic enough, and have enough of a point, to not be uncomfortable. Besides, if I piss someone off, I hope they take my own approach and just move on with life. Being able to agree to disagree is a big deal; a very useful skill. I like dark grey cars and don’t like white cars. You might not agree. And it would be silly to get pissed about that. The same goes for what I post on my blog or elsewhere on the Internet with my screenname.

I admit, my hard divide between work and personal is slowly going away, partly due to my next point, but also partly because blending security work and play is a career goal.

2. Don’t present false faces. I don’t like when people “front,” or present themselves in a way that isn’t in line with who they really are. Life is way too short and precious to not be yourself in anything you do, and if being yourself gets in the way, make changes to be someone better. In that regard, I don’t typically pick my words carefully on my blog; if I have an opinion, I’ll be out with it. (Though it does help that I’m an easy-going kind of guy anyway…and this is also easy to say for someone who thinks of himself as a very decent guy who is sympathetic to objectivist beliefs…)

3. It’s easy to apologize or admit to being wrong. I don’t mean this to sound like a copout for bad behavior, but it is easy to apologize for or admit to being wrong. I find it’s more important to put your opinions out there and be contritely wrong, than to bottle everything up and stew. And this is a tough thing for an INFP to say! (And it’s something my risk-averse nature will always fight with me about.) Granted, that doesn’t mean you can be an asshole and then be contrite about it and things are fine…be reasonable!

4. Remember the important things in security: integrity and privacy. This also applies to IT work in general. Typically we are in positions to know very deep secrets and have access (or can get access) to very sensitive things. The same principles that prevent me from perusing my CEO’s mailbox are the ones that dictate what I divulge on a blog, or anywhere on the Internet. Hopefully most people in white hat security are at least aware of these principles in every facet of their lives.

your ca is now untrusted, and hacker calling cards

In DR/BCP, we plan for natural events beyond our control all the time. But what about cyber events that are beyond our control? For instance, what if a certificate authority makes a high-enough-profile mistake in issuing a fraudulent certificate, causing browsers to automatically update their software (and your users along with it) to no longer trust any certs issued by that CA? Oh, and what if you use that CA for your shit? A situation beyond your control just gut-punched you.

For more information on the DigiNotar incident(s), F-Secure has a great post about it. Pretty lame to have your pants yanked down, then find out they’ve been yanked down several times in the past, and even though you told people you pulled them back up, you actually didn’t, and still had them down. GG for hacker calling cards. 🙂

a roleplay exercise based on rsa example

More information about the RSA hack has been uncovered. In the article, I especially liked this:

The email attack is not particularly complex, F-Secure says. “In fact, it’s very simple. However, the exploit inside Excel was a zero-day at the time, and RSA could not have protected against it by patching their systems.”

This should be a classic scenario for role-playing in any security operation. The first question from any manager: “What do we do to prevent, detect, or mitigate this?”

aaron barr, defcon, and anonymity

Really excellent article on ThreatPost from Aaron Barr: “Five Questions About Aaron Barr’s DEFCON (by Aaron Barr).” I must say, it is very well-written and he’s definitely got a brain in his head. It’s nice to see him in amongst the sort of people who attend Defcon (not that we’re that much different these days from any other group), talking to and learning more about the greyer side of Anonymous and security and people in general, rather than just Washington boys’ clubs. His tentative behavior at Defcon is a bit amusing.

As many commenters pick up on, I don’t necessarily agree with his views in question 4 about anonymity, but I think he does a great job of illustrating the two sides of the problem: freedom vs criminal intent. While I may disagree, that doesn’t mean I have a better answer or argument to spit out. I think he and I would simply differ on our acceptable middle ground; where he’d prefer less anonymity, I’d prefer more, despite wide agreement on the discussion points.

I like some points in #5 as well, as I really don’t think it is possible to have a better Anonymous. Wouldn’t that be like asking for a better 4chan? The very concept steals away what they are, which is unfortunate. It is quite possible that Anonymous is a great idea, but is actually corrupted by that very anonymity and decentralized leadership. On the other hand, I do think we need the sort of greyish societal function that Anonymous fills. The function is important, even if the group itself fades into childishness. It’s kind of like making a statement through graffiti, but eventually losing sight of the point and instead just throwing graffiti everywhere, no matter how dangerous (stop signs?) or silly, just because you can.

Then again, I wonder how many activist groups like this ever *don’t* fall into that problem of slope-slipping? It’s probably more pronounced when you talk about less personal accountability… Still, this does happen with protests where sheer numbers help promote anonymity, or masks/hoods, or something.

The one item I thought Barr would bring up in point 5, but doesn’t, is how the efforts of Anonymous to poke at poor security may in fact give fuel to world/national leaders to reduce internet anonymity. Sort of like a child protesting his being grounded…and ending up grounded for longer, with even worse punishments.

visa slide deck on logging and incident detection

Visa has posted a slide deck, Identifying and Detecting Security Breaches. Sounds fun! If you’ve been around security for a while, nothing in this deck will be new, but it’s nice and short to breeze through for ideas if something is missing in your enterprise security posture. Every bullet point also makes a decent item to review, or to ask your team (if you have one) to describe how it is handled. (I do believe in role-playing!)

Of course, the danger in a slide deck like this is how deceptively easy it makes all of this sound! 🙂

general insights, security context, and learning from mistakes

Two general lessons in infosecurity came past my desk in articles via infosecnews today. These will sound familiar, since I’m sure I mention them often, but I’m feeling particularly introspective this week (usually this happens in the autumn; I’m a little early this year) and am getting back to simpler basics in life and thought for a bit.

Federal Air Marshal Service Blackberry enterprise servers are behind on patches. First, welcome to the real world, and good job raising the issue of missing patches. Second, how big of a deal is this? For instance, are they BES patches, or Windows patches on a system that can’t be reached via vulnerable ports (or the monthly critical IE patches)? In one case I care; in the other, it’s less of a problem. This illustrates how contextual so much of infosecurity is, and how easily non-technical (or technical yet misguided) people can warp efforts and perceptions. This is why checklists and scores can be a hindrance.

Hacked cybersecurity firm HBGary storms back after ridicule fades. This is a neat story, and I’m not entirely surprised by the results, considering the drama occurred in a separate sister company. But it does illustrate that we learn from mistakes, and our security will improve after insecurity incidents. At least, we hope so. I think this is still hard in an institutionalized large enterprise, though (i.e. how much will Sony truly improve versus an HBGary?). Of course, there are many lessons here: if you sell security, practice what you preach; know your threats even as they change; know which security incidents may impact your company and how they will be felt; and so on.

this is why the dumb ones get caught

In a new bit of detail that I hadn’t read previously, Dave Lewis posted about the recent IT admin “hacking” incident that occurred via free wifi at a McDonald’s: “An information-technology administrator has pleaded guilty to crippling his former employer’s network after FBI agents traced the attack to the Wi-Fi network at a McDonald’s restaurant in Georgia. The administrator was caught after he used his credit card to make a $5 purchase at the restaurant about five minutes before the hacks occurred.” Yeah, brilliant.

So, what should this guy have done? I have ideas, and I’ll assume we’ll stick to a McDonald’s.

location
– don’t go to any store you’ve been to before or will ever go to again.
– don’t do this in your own city; go to some other large city; day trip!
– legally park blocks away from the McDonald’s
– or park districts away and take public transportation (paid for in cash)
– do this at normal, busy hours and especially if you see other wifi users present
– en route, don’t speed, don’t do anything to get your location logged
– don’t go through tollbooths (if possible) and try to avoid cameras
– if you can discreetly do it, maybe rent a car

equipment
– use a completely generic laptop and gear; nothing you can’t part with
– don’t name your computer anything that reflects you
– change the MAC address (just because you can)
– don’t install customized stuff on the laptop; reduce the amount you may leak on the wire
– hopefully it is cool but sunny so you can go with a hat, sunglasses, popped collar…
– truly lose or “lose” your computer after (wipe it, sticker it up, etc)
– leave your cell phone at home (or turned off)

you
– don’t draw any attention to yourself; be invisible
– don’t wear your favorite clothes; be generic or even disposable
– buy a small meal or drink to go (no trays)
– for the love of god, pay in cash; pay for everything en route in cash (no ATM stops!)
– take your trash with you and dispose later
– don’t hide in a corner, but don’t let cameras or employees see your screen without you knowing it
– don’t browse the internet or check your email; do your business and leave
– remove jewelry or cover any tattoos or recognizable marks/traits you have

I’m sure there would be more ideas if I spent more time on it, and I normally don’t think about how to stay off the grid like this, but this is a decent start for being mischievous at open wifi.

coming soon: discussions on ips and siem

Coming soon are a series of blog posts from 2 sources that, at least to me, sound like they may answer similar high-level questions despite focusing on disparate technologies. Securosis will be posting about SIEM replacements and Bejtlich will be posting about IDS/IPS. I’m looking forward to views on both, and I think they may delve into similar sentiments.

Bejtlich basically framed his prologue around a tiny article about a cybersecurity pilot: “During an address to the 2011 DISA Customer and Industry Forum in Baltimore, Md., [Deputy Defense Secretary William] Lynn said the sharing of malicious code signatures gathered through intelligence efforts to pilot participants has already stopped ‘hundreds of intrusions.'”

First of all, duh. Second, this isn’t about IPS technology or any technology at all, really. This gets back to what I feel are three *very* important resources in security: people, time, and information sharing. I’d argue if *any* business had this sort of ability, they’d see value as well and we’d all issue a great big, “duh.” Third, the world Lynn is talking about is definitely different from my day-to-day; the concept of security intelligence efforts in any but the biggest private enterprises is a foreign concept, but I can fantasize at least! 🙂 [Aside: I’d include ‘organizational buy-in to security’ as another valuable resource that defense organizations have a big interest in; but that concept gets pretty abstract and overly broad. Essentially, if security sees a problem, they don’t get trumped by the business…every single time.]

Bejtlich posed the underlying rhetorical question: “If you can detect it, why can’t you prevent it?” Sounds quaint, eh? And it’s a valid question, though the problem is that in my years of watching IPS/IDS, they’re far, far too chatty to feel good about outright blocking anything but the absolutely most obvious stuff. That gets better if you put the magic ingredients of people, time, and info sharing into it (as well as visibility into, and power over, the damned signatures!). Out of the box, no IDS/IPS is going to be a fun experience from any perspective that includes operational availability.

At the end of the day, I still feel like so many discussions come back to whether someone is looking for absolute security or incremental security, while accepting that our equilibrium will sit in a balance between security and insecurity.

I might even entertain the discussion that metrics are actually the *wrong* way to go, since I don’t think there is an answer. And security can’t be nicely modeled without human thought and qualitative statements….

incomplete thought: less integration, more security value

I’ve been mentally writing and rewriting a post about SIEM and IPS and spending time on tuning alarms, but just don’t really have a ton to say that’s new. Then I posted (minutes ago) about how we can’t have nice things…. It got my wheels turning…

One point the author makes is, “[solutions] tend to require a bunch of integration work…” That’s sadly true; every enterprise vendor customer wants something different, some checkbox or some strange integration. The problem is that vendors will often satisfy the need, but then insist on using it as an excuse to include the feature in the base product for everyone. This bloats products, making them difficult and confusing to use. The age-old, “we’ll get customer Y to fund this new idea, which we’ll then resell over and over after.”

I also believe it leads to dumber products and large blindspots, especially in security products that lose sight of answering the core security questions: “What actually gives me security value?” “What value does X give me?” It’s hard for a vendor to answer those globally, so it’s nice to let customers actually put in their own work on the tool, rather than automate everything and make it ramen-noodle-bland. Instead, vendors seem to be answering, “What would you like in the tool?” without referencing back to the core questions.

Getting back to SIEM and a concrete example: it’s a frustrating time trying to tune alarms down to a level where I’m neither inundated by thousands of “usually nothing” alarms nor cutting swaths of blindness so large that a truck can drive right into my network. All while working within the sometimes awful boundaries of the tools at hand. I’m often mentally lamenting not being able to parse the logs myself!

Spend enough time with a SIEM, and you start to realize it’s not very good from a security perspective except in hindsight (investigation and forensics) and centralized log gathering. Kinda like DLP, it takes hands-on time to get past marketing positioning and actually figure out for yourself what the real value is going to be. There are better detection mechanisms than SIEM alone. (If your SIEM alerts on an event your better detection tools shovel to it, why aren’t you alerting from the first tool? The tuning will be better.) [Assertions like these are why this is incomplete…]

I’m sure there’s marketing in there, and maybe this is a long-term vs short-term marketing problem, where you want a tool to sellsellsell rather than be a narrow-focus, useful, and long-term successful tool like an nmap or nessus or something; your tool just *is* useful rather than superficially forcing it.

This might be one of the underlying and subtle problems of a compliance-driven industry, unfortunately. Certainly not a nail in the coffin of compliance, but definitely a problem.

your ceo thinks you don’t let him have nice things

Also via Twitter last night, I saw the article This is why we can’t have nice things (in government). The article is short, and while it targets the Canadian government, it mixes subjects by bouncing between “enterprise” and “government” technology, which I think are two different beasts.

But the point holds up either way: new consumerland happy creative tech is *not* necessarily easy to apply to enterprise needs.

This brings up the question: which side should give ground here, the enterprise with its rigid needs, bureaucracy, and efficiency/scale, or creative solutions by smaller creators (I’m hesitating to find an appropriate word there)?

My brain wants to side with enterprise, because the cost of supporting and cleaning up messes from self-imposed inefficient tech is grossly misunderstood outside IT (and accounting). But my gut really wants to side with the creative and (possibly) useful tech that abounds in the world today. You can probably do some really awesome things, and get real work done, when embracing the newest tech.

From a security standpoint, it’s not as clear either, once you dive in. If a company of a few hundred people embraces new tech and allows consumer devices and such, does that put them at more risk? Probably. But do they *realize* more security incidents? I’d *guess* not, but largely only because this new tech is new to attackers as well! Attackers don’t have efficient attacks for it and may not understand it either. I’d say if anything increases, it would be accidental or opportunistic issues, or perhaps blended ones, like when a SaaS provider out in the Internet cloud gets its database popped and accounts divulged that share the same passwords your CEO uses on his Gmail account, which also controls his Android device…

In the end, I consider this a good thought-scenario exercise. People on the bleeding edge of tech will learn things that teams working in the tech of 5 years ago never will, and vice versa, even.

For the record, this little internal warzone of enterprise vs consumer vs bleeding edge is, in my opinion, a healthy state to be in. Being in security isn’t about being paranoid about authority, but rather being in a state where you question and challenge everything (which roughly aligns with traditional definitions of “hacker”).

Then again, this article may just be a disgruntled developer whose “brilliant” ideas just aren’t being realized by the “dumb” masses… (The author also makes quite a few assumptions here, so it really does read a bit disgruntled, but the points end up being poignant!)

to do something good, you first have to do it bad

I can see why Twitter challenges and even betters blogs: I see far more interesting and new stuff than I normally would with just an RSS reader, as people I barely know retweet links from people I’d never know. This short article flew by this morning: “To write good code, you sometimes have to write bad code”.

I don’t even need to quote anything there, and if I had to make one change, I would remove “sometimes.” This applies not only to code, and performance, and security, but to life in general. Taking some risks and being wrong is one of those weaknesses I struggle with regularly. I just have to keep saying it: doing and being wrong is better than not doing at all. And that’s true pretty much every time I take a plunge. Sure, I might get my hand slapped, and I might even get egg on my face or skin a knee, but (and I have this up on my board at work): “A calm sea does not make a skilled sailor.”

There are so many little idioms stuck to me like velcro balls in a Double Dare physical challenge, like how we learn the most when shit hits the fan, growth through adversity, and so on.

For the article: I don’t think you *can* write good (and secure) code without first writing and learning from bad code. The problem is that so many people in [web/mobile] development jobs only have homegrown knowledge and end up learning on the fly with production-level apps. We’re still in a relative infancy with computer programming (or at least with higher-level languages, which change every 5-8 years like tech fads).

asking attackers for constructive solutions

I read nCircle’s Andrew Storms’ blog post, “Rethinking Black Hat: Building, Rather Than Breaking, Security,” and felt like joining the discussion. Essentially, Andrew is saying:

Think back to the [Black Hat] talks you attended and ask yourself how many of them promoted constructive ideas? I’m glad to know that just about every mobile device platform is broken at some level. It’s no big surprise that there are problems with crypto, networking, every OS and even the smart grid…

But let’s push ourselves to take that extra step forward and think about how we can also fix what’s broke. Wouldn’t it be interesting if future Black Hat briefings also had to include one or more ideas on how to fix the root of the problems being shown?

I’m not sure I agree with this, on a few levels.

First, the big one: playing defense is draining. Playing defense involves policies, processes, politicking, covering all angles, and essentially playing a much longer-term game than an attacker. This is draining and time- and soul-consuming. While I wouldn’t say offense and defense should be divorced with a hard line down the middle, I totally understand when an attacker can point out a weakness but is himself weak at effectively describing how to do proper defense against that same attack. I get it, totally.

Second, media coverage of problems is a huge driver. It’s true, the regular ol’ media picks up on the sensational moments where XYZ is broken, and that gets eyeballs. Solution ABC, however, gets next to nothing because, well, it’s boring. Which one is going to drive attention, budget, action, and awareness, including outside the hardcore geek circles? I’d argue that if solutions were so interesting, they’d have been built into many of these products and technologies from the start. Doing things securely is still (and I’d argue always will be) an afterthought, so pointing out insecurity in a sensational way is a state of normalcy, to me.

Third, look out for post-con highs (or lows, in the case of security!). It’s great to come out of a con-type gathering energized with all sorts of great ideas. For hacking cons, it’s easy to come out of them feeling like everything is fucked. I guess I look at that as a sort of healthy state of things. Insecurity isn’t going away. Even the lockpick industry doesn’t try to make unbreakable locks (ok, minus marketing spiels and executive dreams), but instead tries to improve the time-to-pick metrics. Andrew certainly knows this, so this isn’t much of a point of contention for me.

I really do get Andrew’s point, and I would even agree for the most part that it would be nice if attackers also offered constructive information on how to do things better, but I don’t think I’d ever actually call anything out for it, or even voice that concern much at all, for fear of upsetting the current equilibrium between offense and defense. Granted, there are counter-points to my points, certainly… I may be playing a bit of a devil’s advocate here. 🙂

As a last point that I even hesitate to bring up, but really have to since it’s like a little itch poking at the back of my brain on this topic, I would not want to stifle the exposure of problems under the heavy foot of, “be constructive.”

There are two scenarios I have in mind for this:

Situation 1
Employee: “Hey boss, I see a problem with this application here where it doesn’t validate people properly.”
Boss: “That’s nice. It’s now your problem to fix, go to it!”
Employee: *sigh* “…next time I’ll just shut up.”

Situation 2
Employee: “Hey, your application doesn’t validate people properly. I can break it by doing blah.”
Developer: “It’s fine. Prove to me you can do that, and that it’s actually bad.”
Employee: *sigh* “…next time I’ll just dump this to full-disclosure and let you handle your own research.”

In either case, our approach to insecurity or issues can have a huge impact on whether researchers (or those who point out problems) become disincentivized to say anything at all. I agree when a boss wants optimism and solutions, but I disagree when said boss dismisses an issue just because the messenger has no solution of his own.

(There’s a sub-point in here somewhere about a non-expert consuming information about how technology X is broken, and then wanting the solutions handed out to them when maybe they’re not the appropriate audience or consumer of such information. Sadly, I don’t know how to articulate that on short notice without offending or being extremely confusing… For instance, I might hear that CDMA is broken, and I might decry that the presenter should give solutions, when I only want that because *I* don’t have the solutions either…)

keep it simple, infosec…

This was my first look at the MyInfoSecJob.com site, so I glanced at a few articles, particularly the security challenges. Reading the comments (i.e. solutions) for the first challenge pretty succinctly illustrates why infosec is so frustrating for business and IT people! The range of answers is phenomenal, from simple to complex to flat-out suggesting elaborate setups hardwired to specific vendors and various other things.

I don’t think the answer to any, “help me secure this,” challenge should be to grab your favorite 600-page IT security book and thump it on the desk like you’re some pimp on Exotic Liability flopping your meat on the table. Keep it simple, and keep it on task with the information presented. Nothing in a network/data diagram really begs for a sermon about file permissions, OS patching, and extraneous complexity for what is obviously a small shop. If you want to get further down that road, you can’t do so intelligently without more information. You’re just going to lose your audience (or demonstrate your lack of experience by suggesting over-the-top recommendations or flatly inappropriate ones…).

Anyway, based on that security challenge, these would be my simple recommendations:

– Replace the hub with a managed switch, assuming that is the basis of the underlying network connecting the users, the servers, and the router together. That’s the one real question the diagram makes me ask: “Is the hub separate, or is that what the blue ethernet network bar is supposed to be?” You can pick up a SOHO-grade one for around $100, or drop a grand or two for an enterprise-level one.

– Drop in a firewall/VPN hardware device behind the router (i.e. between the internal network and the router). Configure this to position the web server in a one-armed DMZ, and set up the firewall rules necessary to allow the access shown on the diagram. Configure the VPN so external people can log in and reach the fileserver as needed. Get a decent enough device that you can budget for; the features and support will be worth it. As a bonus, put VPN users in their own subnet, segment off the fileserver into its own as well, and configure firewall rules as necessary for everyone’s access. In the absence of other technologies, at least losing one part to an incident won’t cause the rest to be suspect; at least not by default. At worst, grab an old PC and figure out a tool like Untangle or IPCop…
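To make that segmentation concrete, here is a minimal sketch of the rule set being described, written as iptables rules the way a Linux-based box like IPCop might express them. All interface names, subnets, and host addresses are assumptions for illustration; they aren’t from the challenge diagram, and a hardware appliance would express the same policy in its own UI.

```shell
# Illustrative sketch only -- interfaces, subnets, and IPs are assumed:
#   eth0 = router/Internet side, eth1 = internal LAN, eth2 = one-armed DMZ
#   192.168.2.10 = DMZ web server, 192.168.3.10 = fileserver, 10.8.0.0/24 = VPN users

# Default deny for all forwarded traffic
iptables -P FORWARD DROP

# Allow established/related return traffic for the rules below
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Internet -> DMZ web server: HTTP/HTTPS only
iptables -A FORWARD -i eth0 -o eth2 -d 192.168.2.10 \
  -p tcp -m multiport --dports 80,443 -j ACCEPT

# Internal LAN -> DMZ web server: management access (SSH) only
iptables -A FORWARD -i eth1 -o eth2 -d 192.168.2.10 -p tcp --dport 22 -j ACCEPT

# VPN subnet -> fileserver only (SMB), nothing else on the LAN
iptables -A FORWARD -s 10.8.0.0/24 -d 192.168.3.10 -p tcp --dport 445 -j ACCEPT

# Log everything that falls through to the default deny --
# these are the "strange deny rule" hits worth checking into
iptables -A FORWARD -j LOG --log-prefix "FW-DENY: "
```

The point of the sketch is the shape of the policy, not the specific tool: default deny between segments, narrow allows per segment pair, and logging on the denies so outbound anomalies from the DMZ actually get seen.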

This leaves open questions, but they’re questions that require further dialogue with the client.