it’s a geek lifestyle thing

Jeff Hayes just wrote a nice post about hiring and retaining “Millennials,” those workers aged 18-30 (whew, that includes me just barely!). I like what he says, and I really think a few relatively small expenses can keep employees happy and productive. I know Joel Spolsky advocates doing the little things to create a good working atmosphere. Dotcom excess is typified by $900 Aeron chairs, but is $900 really all that bad compared to the productivity that can be gained from a developer paid $70,000 a year? Perspective…

I would also add that people my age and younger really do use the Internet as an integral part of our social lives. This means those of us geeks who work in technology have very blurred lines between work and home life. I’m on a computer at work, I’m on one at home. So please don’t stress at me if I do some personal things during the day, since I’ll likely do some work stuff at home when inspired. It’s not just a 9-to-5 geek thing; it’s a lifestyle that encompasses everything that is me and what I do.

what is with the boring posts

I had a lot of work going on in the latter half of last year, and am only now recovering enough to tinker with things at home again, hence my lack of interesting technical posts and such. I’ve gotten myself back on the wagon by beginning the migration of my webserver from WAMP to LAMP, and this blog itself from MT 3.34 to MT4 (which I hope will fix my comment rss feed). So far testing has been positive, and I’m sure I’ll post some sort of step-by-step on what I did to migrate in case anyone wants to copy meh.

One thing I’ve wanted to do this year on this site is write fewer rant/discussion posts and more technically useful ones. I’ve gotten away from that lately, and it definitely makes me feel a bit guilty.

the misplaced blame of a complexaholic

This article may make you angry, or it may make you nod in agreement; it did a bit of both for me, but I don’t particularly like the presentation. How’d I see this? My CIO passed this out today to people in her department. Michael H. Hugos (MHH) talks about IT complexity in The Recovering Complexaholic, from the Opinion section of ComputerWorld (Nov. 5, 2007). Let’s check it out a bit.

There’s a standing joke that business people never have to ask IT how long something will take and what it will cost because they already know the answers: It always takes a year and costs $1 million — and that’s just for the simple stuff.

When I first read this, I actually went the opposite direction. “Business people never have to ask IT how long something will take and what it will cost because they’ve already made up their minds that it will be immediate and cost nothing.” Oops, he went the other way with that joke!

MHH then goes into how “consumer IT” is better than corporate IT, which I think he is confusing with the overall SaaS movement. I’m not sure I would consider that “consumer IT.” Does “consumer IT” know anything about managing 50+ systems, software packages, policies, accounts, or pieces of data? Not usually. Just because you can access it from your browser at home on your own computer does not mean the solution is “consumer IT.”

He also opines about how IT makes things so complex that nothing gets done, and when something does get done, it costs a lot of money. I think business as a whole is as guilty of this as IT. Business often can’t make decisions and leaves such things to IT to sort out. IT then has to cover all the bases and make processes so robust that they become complex monsters, just to CYA in case something doesn’t meet some unspoken requirement. Business can condition IT to overanalyze and overcomplicate solutions just as much as an IT person can fall into that trap on their own. This is basic Psychology 101 conditioning.

I truly think complex IT can be just as successful as cowboy IT (come on, that’s what MHH kinda sounds like he wants…get things done, think about it later), but it all depends on the personality of management and aligning IT to that personality. If the organization is large, slow-moving, and expects this project to be done only once, you might need to make the solution complex and large. If the company is small, fast-moving, and likely to revamp the whole architecture in 3 years when it makes a big break and growth spurt, then keep it simple.

I really buy into the idea that we just need to Get Shit Done. I also buy into the desire (not need, mind you!) to keep things from becoming complex. IT people really do hate complexity as much as anyone. It makes problems difficult to diagnose, compounds itself over time (try building a complementary system next to an already complex system…it becomes complex itself), and typically promotes instability and insecurity. Besides, we want to accomplish things as well, not just let something stupid drag on for 12 months.

Yes, IT can perpetuate the problem, but it isn’t something you can lay on IT alone; it belongs to everyone involved. I think this is called ‘alignment,’ but I could be stepping outside my pay grade there.

MHH asks a few rhetoricals: “What is our objection to this stuff? That it’s not scalable in the enterprise? That it’s not robust? Or that it doesn’t feed our addiction to complexity?” The answers depend on what management wants, and trust me, if IT has been bitten by management in the past, they WILL know how to approach these answers. When I propose “consumer IT” as a solution to problem A, will management later get frustrated that it can’t be tailored to our processes (instead we have to use the product the same as everyone else)? That’s a valid concern, especially when IT knows management can’t stay within the lines of the solution…

my little law of security as an enabler

I’ve quietly been compiling a list of “laws” for my paradigm on security. I like lists of “laws”; they help put one into a proper mindset where questions are answered before they’re asked, leaving time for more important things. I used to have such a list of laws when it came to dating girls back when I was in college. They were great, but I’m still unmarried, so maybe they worked too well…oops!

One of my little laws (they do frolic in a quiet pasture like my little ponies) sobs a lot these days:

Security is not an enabler except in three cases. First, when the organization is in the business of security (software, hardware, services…). Second, when security is required for the business path to exist. Third, when economic forces suggest that security is the cost-effective answer (e.g. cost of security is less than the cost of fines or lawsuits for breaches).

I often hear about how security should be an enabler and not an inhibitor. I don’t buy that. With regard to the second case above, this only happens when a regulation, expectation, or law exists that places economic leverage on the organization to meet a level of security, which can then allow business to occur. This is a natural extension of the inverse relationship between usability and security. This says to me that all other security efforts are not enablers, so move on to more important matters and proper frames of mind.

link to the top ten myths of pci dss

For a long time, I could spot an RSS item in my news feeds that dealt only with PCI and quickly skim it or drop it entirely. “PCI doesn’t really affect me, although I should stay aware of it.” OK, I know that’s not true; I do need to know it, and this year that becomes more obvious. Our company has a soft goal of becoming PCI compliant. And, yes, it is driven by a large client who requires it.

In that light, I’ll now have to keep up to speed on PCI nuances and Q&A posts. Walt Conway over on the PCI DSS News and Information blog recently posted his top 10 myths about PCI DSS (part 1, part 2, part 3).

“And if we were compliant at that moment, we are still only one system change away from being non-compliant.”

And on the myth that “PCI is inflexible with unreasonable technical, security, and business requirements,”

I hear this one a lot, and I do not agree. Nothing in PCI is not already a best practice (so much for being unreasonable), and there is the option of a compensating control for any requirement (so much for inflexibility).

I feel that PCI is tough when a) the business doesn’t know what it is doing (processing cards), or b) its security thinking and practice are way behind.

the new face of cybercrime, trailer

From Fortify Software comes this trailer called The New Face of Cybercrime. The part that spoke the loudest, in my mind, was near the end when Ranum came in to essentially say that no software is so trivial that it can be made without security in mind. Who knows when that software will be picked up and used in a way that people’s lives depend on it? It looks like this full video might become a staple of any corporate bookshelf for awareness training.

My only beef on this? It appears sponsored by Fortify Software, and they definitely have a stake in saying the security of tomorrow is not in the network but rather in the software and the software development lifecycle. This could turn out to simply be a big budget advertisement…

info sharing efficiency challenging more than just riaa

I was reading Marcin’s post today, which included a mention of the boy who created a remote to change tram rail junctions, leading to a derailment. I also recently bought my first Rubik’s Cube ever, and then looked up the theory on solving it (no, I don’t have the time or the mathematical interest/patience to truly learn it, but I wanted to know the approach and algorithms involved…no, I would never have figured it out myself, I think). I also read about remotes turning off televisions at CES, disrupting presentations.

What do these have in common? I think there are still a lot of things that are very hackable. While the cyberverse keeps progressing at breakneck speed, much of the analog world is still using old technology that relies heavily on hidden knowledge. In the past, much like with the Rubik’s Cube, I wouldn’t have had easy access to the knowledge needed to solve the puzzle. These days, information sharing and problem-solving are amazingly accessible to so many people.

bejtlich on finding competent security personnel

Bejtlich posted an excellent email from a reader of his asking how to find competent security personnel. What a wonderfully worded email, and rather than post a huge comment on Richard’s site, I thought I would pollute my own blog with it instead! I’ll try to keep it bulleted (something I’ve been striving to do this year). I also printed out the questions; I try to always honestly answer such things as practice.

1. Unlike some commenters, I really like the questions posed. Sure, they can be vague, but part of a hiring question should be to get the analyst to analyze. What is the interviewer *really* going after, and can you help them along by accepting and adapting to the question? While you’re fiddling over details of the scenario, the incident is still happening.
2. Look for analysts in the right places. If I knew about this job and it was in my area, I’d apply or pass it on to others. Are you finding me? I would be willing to bet that the post on Bejtlich’s blog produced several job candidates; I’d bet a better return than current efforts have yielded! Get to places where we hang out…Security Focus has a job board, the SecurityCatalyst Forums, and so on. Get your own security blog and join the Security Bloggers Network to get good exposure and post the job. Or have one of them post it up. Check with your local Infragard (a great place to network!) or even other local professional tech groups like CIPTUG to see if they know people interested or maybe one of them wants to cross over.

3. I can say the term “senior” can be daunting. Newer security-inclined persons may avoid such a job title, at least at first. On the other hand, the term “junior” might imply entry level or fresh out of college and might turn some people away. I like more neutral titles, personally.

4. Make sure you’re properly valuing this role. A lot of people will say a manager needs to pony up and pay competitive salaries, but that is often out of the manager’s hands. Perhaps the company itself needs to properly value the position/need and advertise accordingly. This might mean dropping the “senior” and grooming some greener people.

5. I think Richard is correct: there are still few people who can properly answer those questions, let alone actually do what the answers describe. However, I think there is still a good number of people willing to be groomed into such a position, or to groom themselves, if given the chance.

6. “Am I setting the bar too high?” Maybe. I think accuracy in answers can be fixed, but personality in handling the questions is much more difficult. If they don’t know the difference in response between a web attack and a client-side buffer overflow, they can quickly learn via process documentation or after the first one or two incidents of each. Are they capable of detail, learning, and improvement? Then again, maybe that’s the difference between the “senior” and the “not-senior” guys out there.

intolerant of the inevitable

A reminder-to-self about a phrase I should start using more: intolerant of the inevitable. A security breach is inevitable and there is no silver bullet to save us. Yet we’re so very intolerant of such an inevitability. It’s a double standard we need to keep addressing. This is not necessarily a digital security problem, but rather a cultural one. (I had examples, but I’ll keep it at this for now, for sanity.)

(If you know the place I posted about this in the comments, then you might be a stalker!)

mass sql injection

I mentioned yesterday a report about tens of thousands of websites being infected by some malware. SANS has an update which also points to the ModSecurity blog. Turns out this was some automated process that sought out SQL Injection-vulnerable sites, injected the script, and moved on. Impressive!

This kinda drives home some concepts.

1) Think of an attack today that seems unlikely or something an attacker would do manually. Plan on that attack being automated someday. Yes, web app sec folks will say some things aren’t like that, like business process errors, but for the most part attacks can be automated, just like vuln scans can be automated. This can be done by a small number of scanners running, or even a rented botnet that can infect huge swaths of systems quickly. The next worm? We don’t need to worry about the next worm when botnets can act as one at will. Just give them a vulnerability, or now even a class of vulnerability that can be scanned for, and bam, overnight firestorm. And every site attacked in the last few weeks can turn into hundreds of infected visitors to that site.

2) If you check that Google search for infected sites, you’ve just got an inventory of sites vulnerable to SQL injection. Do a diff on them over the next few days (see the first sketch after this list), and you’ll filter out the sites with a good incident response. Want to steal some info or do more targeted and nefarious things? There’s your target list…

3) Mitigations? Sure, we can erect barriers in WAFs (ugh) to help block these things, but it all comes back down to secure coding (see the second sketch below), regular scans/audits, change control tripwires, and monitoring. What’s worse than being hit by this attack? Being hit and never knowing it.
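To make point 2 concrete, here is a minimal sketch of the kind of diff I mean, assuming you’ve saved the search results as one-URL-per-line text files on two different days (the filenames are made up for illustration):

    # diff_infected.py -- compare two days' worth of saved search results
    # (one URL per line) and report which sites appear to have cleaned up
    # and which are newly listed. Filenames are hypothetical.

    def load(path):
        with open(path) as f:
            return set(line.strip() for line in f if line.strip())

    day1 = load("infected-day1.txt")
    day2 = load("infected-day2.txt")

    print("Cleaned up (gone by day 2):")
    for url in sorted(day1 - day2):
        print("  " + url)

    print("Newly listed on day 2:")
    for url in sorted(day2 - day1):
        print("  " + url)

Sites that drop off the list quickly probably noticed and responded; sites that linger are the soft targets attackers will keep coming back to.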
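And for the secure coding piece of point 3, the single biggest fix for this class of bug is parameterized queries: the user’s input gets bound as data instead of being glued into the SQL string. A tiny Python illustration, using sqlite3 purely as a stand-in for whatever database your web app really talks to:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE news (id INTEGER, title TEXT)")

    # Something hostile arriving from a query string or form field.
    user_input = "0; UPDATE news SET title='<script src=evil></script>'"

    # Vulnerable pattern: the attacker's text becomes part of the SQL itself.
    #   query = "SELECT title FROM news WHERE id = " + user_input

    # Parameterized pattern: the input is passed as a bound value, never parsed as SQL.
    rows = conn.execute("SELECT title FROM news WHERE id = ?", (user_input,)).fetchall()
    print(rows)  # no rows returned, no injection executed

A WAF can buy time, but a change like this is what actually removes the vulnerability rather than masking it.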

recent mass-hack of sites

Saw some news today about “94,000 sites hacked” and sending users to a malware-ridden site. That’s a hell of a lot, and it prompted some investigation on our team. Sadly, we’ve found very few useful bits of information about what happened (I suspect some common piece of software on all these sites was pwned…analytics? ads? site mgmt?). We have, however, decided to block two URLs, *.ucmal dot com and *.uc8010 dot com, as they are distributing malware. The Google search linked in that first article shows an impressive array of pages and sites…

irony in local admins circumventing group policy

Mark Russinovich is a Microsoft employee; you may have heard of him. In a recent blog post he describes how the AutoPlay feature in Vista stopped working on his machine due to a Group Policy update. Mark, being a coveted local administrator on his laptop (a work-assigned one, as implied by the post), found the setting to re-enable AutoPlay. And to prevent Group Policy from reverting the setting back to what his admin wants, he opted to block it by adjusting permissions.
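For the curious, here is a small, read-only sketch of peeking at what that policy currently says on a box, using Python’s winreg module and the standard Explorer policies value behind the “Turn off AutoPlay” setting; Mark’s post doesn’t spell out exactly which key he locked down, so treat the key and value names here as illustrative rather than as his method:

    import winreg  # Windows only; this only reads, it changes nothing

    KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

    def autorun_policy(hive):
        """Return the NoDriveTypeAutoRun bitmask, or None if the policy isn't set."""
        try:
            with winreg.OpenKey(hive, KEY) as k:
                value, _ = winreg.QueryValueEx(k, "NoDriveTypeAutoRun")
                return value
        except OSError:
            return None

    for label, hive in (("HKLM", winreg.HKEY_LOCAL_MACHINE),
                        ("HKCU", winreg.HKEY_CURRENT_USER)):
        value = autorun_policy(hive)
        if value is None:
            print(label + ": no AutoRun policy value set")
        else:
            print("%s: NoDriveTypeAutoRun = 0x%02x" % (label, value))

Checking is one thing; re-permissioning the key so Group Policy can no longer write to it, as Mark did, is quite another.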

Now, Mark likely has a work-related reason to use AutoPlay, and he took steps to get his work done (giving a demo of the feature) by circumventing his admins and likely corporate policy. And then he posted this for others to see and learn from, both technically and by example.

Mark says,

A local administrator is the master of the computer and is able to do anything they want, including circumventing domain policies…and that’s just one more reason enterprises should strive to have their end users run as standard users.

So, is Microsoft wrong for allowing someone like Mark to run as local admin? Or is Mark wrong for circumventing that trust? For less senior employees, I would be more forgiving, but Mark knows full well what he’s doing. Likewise, if anyone qualifies for local admin rights on a corporate-issued laptop, Mark is the least of your worries. Should Mark work with his GP admin to either do this better or make Mark an exception (admins love exceptions)? Things that make you go hmmm.

I just find this all unintentionally funny…and a horrible grey area for us professionals.