the it-as-a-business trainwreck

Bejtlich recently posted about an article on the trainwreck of running IT as a business. I suggest reading it with his emphasized points, and then reading the original article on InfoWorld. I’m tempted to repost the entire article, just because it is that thought-provoking; a bit of a surprise from a rag like InfoWorld, which makes me scared that they may find this rogue article and remove it!

Seriously, read the article. Everything below this point is really just rewording the points Bob Lewis makes and Bejtlich emphasizes.

The article is chock-full of good points, and I myself am in a company where IT is mostly run as a separate business silo where my ‘customers’ are other internal employees. Of course, this turns us into a utility company that is not necessarily innovative and ahead of the curve, but rather increasingly pressured to reduce (or charge back) costs and keep things flawless (classic negative conditioning). This also makes us captive to the culture of “the customer is always right,” or “give them the pickle.” (We’re not children anymore; the customer is not always right, and it’s only ok to give someone a pickle when their pickle request is reasonable.)

Likewise, we shouldn’t be fighting against the business initiatives, but that is often how it feels. And it feels that way because our internal “customers” make requests/demands of us much like customers make often-unreasonable demands of their vendors. It’s a disconnect. Not a communication disconnect, but rather a disconnect in the concept of shared ownership that comes from all being part of one business (which is ironic considering we’re employee-owned).* If we weren’t conditioned by the business to be risk-averse, we’d likely be on top of, or already doing, some of their requests!

Then again, maybe this whole article’s idea about how bad “IT as a business” is, is itself a product of even more pressure on IT budgets and costs. How better to eliminate that pressure than by putting it on the shoulders of the whole company? Or it may be saying, “Help me help you.”

I really love this part, and it is something I live through weekly, especially with how closely I work with our internal IT and developer teams:

“Or try to explain your file and print server hosting rates. It doesn’t matter that part of that rate is full backup and off-site storage. Or as part of a clustered environment you have built-in redundancy and that ensuring the server is updated and secured appropriately is part of that cost. Their friend Joe hosts these things on the side, and it is much cheaper.”

When IT is a business, selling to its internal customers, its principal product is software that “meets requirements.” This all but ensures a less-than-optimal solution, lack of business ownership, and poor acceptance of the results.

Other IT people (largely developers) are notorious for this. The classic example is, “Why does storage cost so much? I can go to Best Buy and get a terabyte on an external drive for $100.”

In fact, I would go so far as to say that this whole problem of being an internal customer is compounded right now by the consumerization of IT; i.e., the influx of Apple products, mobile devices, cloud-based storage (which is just an “enterprise” way of saying “on the web” for most of these services), and outside hosting/solutions. This is why we’re suddenly losing this battle: the “customers” are making the recommendations and demands, not IT. IT is trying to avoid more black eyes, delivered as a result of being a “separate business.” (Managerial personalities can make an impact as well, especially those who refuse to ever be wrong, even when their requirements are horrid.)

If I had to nitpick the original article, it would be over the assertion that this whole “IT as a business/chargeback” issue is a product of the outsourcing industry; I don’t think that link is so clear. I think business largely doesn’t know how to handle IT as an integral part, so the default behavior ends up fitting the “IT as a business” model: budgets are constrained and IT managers are pressured to justify costs, so they charge back as a way to illustrate who is costing them what. This is a top-down problem, not a sideline/outsourcing problem.

* What is even more ironic is the effort to force more innovation into the business over the last year. While I think it is wrong to “force” innovation and make it a requirement, it is even worse to try to do so in an environment where risk-averse actions are rewarded. This is a whole topic in itself…

you compared how many web app vuln scanners?!

Shay Chen is apparently a “sec tool addict.” As such, he’s taken the time to compare a huge list of web application vulnerability scanners and present his findings. This is way too huge to digest quickly, so I won’t speak to his accuracy (even if I could spend the time to do so!), but this report can serve several purposes, not the least of which is a very long list of tools to use and abuse in web app security. Hopefully he has somewhat valid results. I expect most tools have a sort of give-and-take when it comes to detecting vulns and being useful. It would be folly to try to rank them against static tests, as I’m sure you’d need a blended approach to get the best chance at high coverage. (He basically concludes as much, if you scroll down far enough.)
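To illustrate what I mean by a blended approach, here’s a minimal sketch in Python where the scanner names and findings are entirely made up (none of this is Shay’s data): no single tool’s result set covers everything, but the union of several gets you closer.

# Hypothetical findings from three hypothetical scanners, for illustration only.
scanner_findings = {
    "scanner_a": {"sqli:/login", "xss:/search", "lfi:/download"},
    "scanner_b": {"xss:/search", "csrf:/profile"},
    "scanner_c": {"sqli:/login", "xss:/comments", "open_redirect:/out"},
}

# Treat the union of everything found as the best-known set of real issues.
all_found = set().union(*scanner_findings.values())

for name, findings in sorted(scanner_findings.items()):
    print(f"{name}: found {len(findings)} of {len(all_found)} known issues")
print(f"blended: found {len(all_found)} of {len(all_found)} known issues")

The point isn’t the numbers; it’s that ranking any single tool against a static test misses how much the tools complement each other.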

quick security livecd roundup

Seems to be a bit of a renaissance of security-oriented livecd distros floating about. Somewhat exciting since the long-past days of things like Phlak, Knoppix-STD, and some other one that had some green in it, was also an acronym, and included the letter “G” somewhere…I forget.

SamuraiWTF has been updated!

SecurityOnion has been updated!

DEFT will soon be updated!

BackBox is new?

Blackbuntu is new?

For any other ideas, check the livecd menu category to the right. Yes, I’m missing some like Helix or Nullbound. I just don’t always feel right grouping [off-and-on] commercial offerings under the ‘livecd’ category. Others like Russix (wireless-oriented livecd) seem to be MIA.

valsmith on the evolution of pentesting

To welcome in a new year, trundle on over to read a recent post by Valsmith on how “penetration testing is rapidly becoming obsolete” (and read the great comments). Yes, this topic has come up in various forms the past few years, but too often those claims are made by analysts or people who aren’t actually doing the tests. Or if they are, what they’re really saying is, “Pen testing is changing from how we knew it.” I think Val’s post is more coherent than most.

I’d ramble on more about it, but it’s all been said before! I will just say that there is still going to be a market for people who can parse the security results and go the extra mile to produce real value, inclusive of pen testing. If you think IT/Ops can interpret and handle even today’s automated scanners, log managers, vuln scanners, web app firewalls, and DLP auditing tools…you’re not living their reality. That sort of approach is usually called “lip service” or compliance-oriented security. Seriously, how many auditors still miss the obvious things or get bamboozled when confronted with too much technical smoke and mirrors?

the motivation of security talent

Just wanted to point back to a post from Bejtlich, specifically talking about a recent Tweet of his:

Real IT/security talent will work where they make a difference, not where they reduce costs, “align w/business,” or serve other lame ends.

That doesn’t mean security shouldn’t align with business and all that jazz, but those items are not really the goal of anyone with half a good mind in security. They want to do cool things and make a difference. They’re passionate, enthusiastic about security, hacking, and defense. Who gets enthusiastic about aligning with business or reducing costs? Yes, some people do, but I think there is little intersection between those people and badass security geeks.

boa reacts to possible leak threats

Funny how the tangible threat of action/leaks “possibly” aimed at Bank of America has caused them to spring into action. Hopefully BoA is only ramping up internal investigations and not actually doing operations differently, otherwise that would beg the question, “Why weren’t you already doing x____?”

It’s also funny how much power Wikileaks has right now. Even simple short-term bluffing (if it only amounts to that) causes more security-enhancing work to be done than so many security professionals can dream of getting accomplished over years of internal risk evaluations that dance around full-on FUD alarms (execs and sec pros have different tolerances for where that FUD line lies…).

I really didn’t care much for Wikileaks vs governments, and somewhat wondered if it would stop there. Indeed, it looks like this may spill into large-corporation realms, which interests me much more. This is a give-and-take topic all by itself, and I’m resisting the urge to opine about it further…

What if Wikileaks dropped hints it may be dropping data on your company soon? What are the chances of such data leaking?

What if someone you partner with is the next Honda/Silverpop and you suffer a breach because they suffered a breach?

the big gamble of security

Gawker recently had an issue that exposed the security of their web code (and overall posture) as crap. Not surprising. Reading the comments to an article about it on The Register also yields no surprises.

There are plenty of managers and others who don’t understand the consequences and risks of not paying proper respect to security. They truly do need to be educated.

But there are others who *do* understand the risks, and who *still* make decisions that leave security lacking. This is what I call the big security gamble. And it is just a matter of the risk a company wants to accept, or at least put off until such time (if ever) as something does happen. See, it’s that “if ever” part that really starts the shoving matches. In security, we really should be talking about the inevitability of an incident. But human nature won’t necessarily accept that inevitability. You really might be able to go for many, many years without suffering (or at least knowing you suffered) an incident. Kinda like not having car insurance and yet still driving…

It’s hard to argue that deadlines should be pushed in order to get security done right, especially when a product may be new and no one even knows if it is viable yet or going to succeed at all! What comes first, the product (and resultant revenues) or security spend? [I also like to say, to head off a natural line of argument: which comes first, learning how to assign a variable or learning how to assign a properly bounded and verified variable?] Of course, once it does succeed, that inertia of ignoring security is hard to turn around until something bad happens…

The fact is, economics will trump security. Hell, economics trumps *safety* even (though few people like to talk about that). This is life.

That sounds exceedingly defeatist and cynical, and in a way it is. But it really, really helps keep a security geek sane to come to terms with reality every now and then. That won’t stop me from always giving the ideal suggestions when asked, or trying to gain as much security ground as possible when given the chance. Or striving to do security correctly in the first place.

If I got pissed off at everyone who had a security incident or lapse, or who didn’t cover every hole and feasible issue, I’d be pissed off at everyone. Granted, there is negligence and stupidity…but…you get my drift, I’m sure.

bad things still happen to good systems

I’ve been quiet about the whole Wikileaks thing, and I likely will remain so. I don’t have anything to add that hasn’t been said already, and I gravitate closer to the fence than even I probably admit to myself.

Nonetheless, I won’t refrain from pointing to nice articles on said subject, like this one from Chris Swan posted at Fudsec. I like his practical thoughts on the subject.

To add: this was a failure involving a trusted user leaking docs. Would technology have prevented/alerted on this? Perhaps. But ultimately this still boils down to humans (talented staff, not just in security log-watching…) solving human problems (background checks, education, management…).

Now, maybe if they had body scanners and pat-downs whenever you enter or leave locations where you can view/manipulate sensitive data…

a little bit of blog history

Just because I was curious, I did some checking on my site here. I have 1,454 posts here on Terminal23.net dating back to 8/9/2004. That’s 19 posts per month. Prior to that, I made all my posts on my personal blog at HoldInfinity.com (less geek, more personal blog), which has 268 posts since 10/05/2001. I’d say I’ve been blogging about security since 2004.

Even prior to that, I’ve had a web site since 1997 (maybe late 1996 if I really push the definition), but those early sites are no longer available except maybe on a floppy somewhere in a desk.

jay adds 5 infosec rules to live by

I like lists. Jay Jacobs over at his Behavioral Security blog posted a list of infosec “rules to live by.” Can’t say I disagree with any of them, but thought I’d add to the discussion a bit!

Rule 1: Don’t order steak in a burger joint. I don’t really have much to add to this excellent point!

Rule 2: Assume the hired help may actually want to help. I agree with this, but I’d also play with changing the wording in one of two ways. First: “Don’t assume anything.” Second: “Assume the hired help will follow the path of least resistance.” I know, I’m twisting that rule around almost 180 degrees. I get that awareness can (and does!) foster the ability for people to make proper decisions. But I can’t assume or rely on that enough to call it a rule. I really like the last line in Jay’s paragraph on this, though. Still, I think he makes a similar point to the one he’s after in this rule in the next few rules.

Rule 3: Whatever you are thinking of doing it’s probably been done before, been done better, by someone smarter, and there is a book about it. Absolutely! This is where being in touch with the greater security community is invaluable.

Rule 4: Don’t be afraid to look dumb. I can’t say this enough, especially to myself. Don’t be afraid to look dumb! We only get one life, usually one shot at things like first or lasting impressions. Don’t waste yours and other people’s time with false facades. Take a shot, fail, learn, do it better the next time. Lay your balls out there. As I’m fond of saying in the sysadmin world: we learn the most only when we’re troubleshooting issues or in the middle of failure. This is why “fail” and looking dumb need to be intrinsic cultural values in an IT organization.

Rule 5: Find someone to mock you. I’d probably reword this rule, but the point absolutely stands: find people who will honestly challenge you, mutually. This is the age-old, “Surround yourself with people smarter than you,” maxim. But really, it’s about mutual respect and being able to follow rule #3 and still be a man (or woman).

exotic liability 70 on honeypots

I have made my opinions on honeypots known, and while I think they’re fun and useful to those who have the time or focus to analyze attackers and their tools (I can’t stress enough that there *are* orgs that *should* be using honeypots [like F-Secure!]), they’re just not useful to most organizations (in fact, almost all, if you ask me).

So I was a little skeptical while listening to Exotic Liability #70 when Lenny Zeltser came on and the topic of his recent blog post about honeypots came up (skip to 56:30). Chris Nickerson gave excellent reasons against bothering with honeypots. That could have been me talking, almost word for word. Researchers love honeypots, but that’s part of the problem: researchers sometimes just don’t get what really gives value to an organization’s security posture *right now*, when it has limited resources (not grants or research funding).

But Lenny made one interesting observation: give your talented staff a honeypot to play with, or they may get bored and quit the organization for somewhere more exciting. I think that’s an interesting point, but probably not one that will matter too much. First, not many orgs have honeypots, so it’s not like a bored staff member can simply jump to another org that has one. Second, if sec staff is bored, something is wrong. I can’t imagine that any real security pro is ever bored. Frustrated and disheartened, yes. But truly bored? Never. Truly, never.

Lenny’s article makes a bit more sense when you dismiss the idea of putting honeypots out on the public internet, which Lizzie helped draw out in the interview. Then you’re really just using honeypots as another internal tripwire (or, for those with the time and talent, a way to examine attacks). Honestly, I’d still suggest putting other kinds of tripwires in the environment first. Just like Chris says, I can’t think of any situation where I would ever suggest a company try out a honeypot in their environment. There are far, far, far too many other things that can be done.

(In economics, this is called opportunity cost.)
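To be concrete about the “internal tripwire” flavor (as opposed to a research honeypot), the whole idea can be as dumb as a listener on a port nothing legitimate should ever touch, where any connection at all is worth an alert. A minimal sketch, assuming Python and an arbitrary unused port; this is an illustration, not a deployment recommendation:

# Minimal "internal tripwire" sketch: the port and log format are arbitrary choices.
import socket
import datetime

TRIPWIRE_PORT = 3389  # pretend-RDP on a box with no real RDP; any unused port works

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", TRIPWIRE_PORT))
    srv.listen(5)
    while True:
        conn, (src_ip, src_port) = srv.accept()
        # Any connection here is suspicious by definition; just record it and move on.
        print(f"{datetime.datetime.now().isoformat()} tripwire hit from {src_ip}:{src_port}")
        conn.close()

Which is exactly why it’s hard to justify: you can get the same “someone touched something they shouldn’t” signal from tools and logs most shops already own.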

Next, Lenny’s article mentioned that honeypots are really just for mature security programs. But how many executives and even middle managers will *think* they have a mature security program, then hear about honeypots and how infosec researchers say they’re useful, and turn that into a new project or outright purchase? I really don’t think anyone should think about honeypots until outside infosec professionals “certify” their programs as mature *and* they have some vested reason to analyze attackers and their tools (i.e., they research and then sell security). It’s important to make sure that an outside entity labels you as “mature.”

Lenny also mentioned the idea that an IPS could, instead of just preventing the attack, actually pretend that the attack worked and entice more interaction with the attacker. This is also interesting, but it really does break down once you analyze it with any experience on security teams in real organizations. First, the level of sophistication in that IPS/IDS (or whatever tool) would have to be huge in order to entice anything except very specific scripted events. Second, why bother? I would rather my IDS/IPS present me with packet captures of what it alarms on, and not bother with enticing those attacks and giving me even more captures. And so on…it’s an interesting idea, but way too sophisticated for any of these companies or boxes that try to be “turnkey” or automated. This still all comes back down to talented staff, as usual, anyway.

german hackers target celebrities

German hackers gain access to celeb computers [namedrop Lady Gaga for more attention]. I know it is fairly common to have a Twitter or Facebook account hijacked, but I’m always surprised we don’t hear about more celebrity accounts being hacked. Then again, just because we don’t hear about it doesn’t mean it isn’t happening on a regular basis.

What’s really fun is how Twitter/Facebook expose the interactions between celebrities. You want to target a high-profile celeb? Maybe start by examining all the people they follow on Twitter and find the normal joes they trust/listen to. (I can’t be the only one who sometimes wonders who that 1,000,000-follower celeb keeps on their tiny list of 75 followed accounts.) And so on. You can really spread some damage once you get into a few systems and start preying on the cyber-social aspects.

I once had a dream (as in, a daydream, not a life ambition) about being a security/computer expert for celebrities. I mean, they’re just the same as any old joe (or any old C-level) and have the same issues and lack of knowledge as anyone else. Plus extra money to throw out for dedicated service. I imagine that market would be lucrative with some word-of-mouth.

Though I guess PR agencies and agents would rather cover those zones. Who knows.

Article via infosecnews mailing list.

it’s not going away or getting any better

(Looking back, I seem to have kind of vomited out a trail of thoughts in this post…pardon the ramble.)

We really have to live with certain things in security. Issues won’t go away. And none of us will ever agree on what to do about them (get 10 security consultants in the same room, even some from the same firm, have them fill out a questionnaire, and you’ll get 10 different strategies for security).

Brian Krebs does some great research and coverage (as usual…seriously, why aren’t there more badass [real] security journalists like Brian??) of an escrow firm suing a bank because attackers made an “authorized” wire transfer out of the escrow firm’s account.

This situation, where business owners have computer systems that get owned and are then used to victimize their bank accounts, isn’t going to go away. Ignoring what the bank can do to help (multi-factor…), I both like and dislike Brian’s suggestion:

The cheapest and probably most formidable approach involves the use of a free Live CD, a version of Linux that boots from a CD-Rom.

This is really good advice, but I would temper such advice with some caveats.

First, I’m a firm believer that, ultimately, an OS is only as secure as the person using it knows how to keep it secure. Way more people have a fighting chance of keeping Windows secure than have any idea how to do the same with Linux.

Second, I wouldn’t necessarily expect a Linux OS to always be compatible with (or supported by) whatever your financial institutions implement for their website or authentication scheme. In some cases, I suspect you won’t be officially supported, and that could be a problem when push comes to shove.

Third, if you have any system issues (business owners are usually not computer experts), you’ll have an easier (and cheaper) time trying to find some support for a Windows box than for your Linux livecd. This might depend on how much you intend to DIY and your aptitude for learning Linux…

Fourth, mention Linux and/or livecd and non-geeks will give a look that is worse than a blank stare: the “yeah-I-won’t-ever-understand-that-and-thus-will-trust-it-less-I’ll-say-I’ll-look-into-it-but-really-do-nothing-because-I-don’t-have-the-time” look.

I really, really like the idea of a dedicated netbook or system that is *only* turned on and used for financial operations or updates, but runs on Windows and is not necessarily of the Livecd or USB-operated flavor. Most people understand and take to Windows quite well, banking sites will support it and the popular browsers that run on it, support is usually easy, and so on.

Don’t get me wrong, if a business is willing to go the Linux livecd route, that’s definitely a worthy suggestion, but the reality gut check tells me to more often expect the dedicated Windows box to win out.

Really, smaller and even medium businesses are just screwed as a default bottom-line reality. They’re almost certainly running Windows with Internet Explorer and don’t have any decent sort of web browsing filter. This means that, over time, the odds of being infected approach 1 (that’s math).
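For anyone who wants the “that’s math” part spelled out: if there is some constant chance of getting popped in any given week, the chance of at least one compromise over n weeks is 1 - (1 - p)^n, which creeps toward 1 as n grows. A quick sketch with a made-up 1% weekly risk (the number is purely illustrative):

# Hypothetical compounding of a small, constant weekly compromise probability.
p_per_week = 0.01  # assumed 1% chance of compromise in any given week

for weeks in (1, 26, 52, 104, 260):
    p_compromised = 1 - (1 - p_per_week) ** weeks
    print(f"after {weeks:3d} weeks: {p_compromised:.1%} chance of at least one compromise")

Whatever the real weekly number is, the shape of that curve is the point.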

Businesses pretty much need some level of IT these days, simply as a necessary part of having a business, much like a telephone, payroll, accounting, desk/printer services, etc. Unfortunately, while everyone eventually does things like accounting in pretty much the same way (unless you’re being dishonest, there are only so many ways you can manipulate numbers that are acceptable to the government), your computer systems/IT can be creatively used and built in an infinite number of ways. This is one big reason we get so much angst between business and IT, or the CFO and CTO, or the business and its insecurities. There’s no “correct” way to do it, but rather subjective measures of which ways are effective at accomplishing things (to the business, a cable mess and fans in the server closet keeping 10-year-old servers from overheating is just as correct as a polished, professional data center…as long as availability stays up and cost stays down).