I only just noticed that the ISC SANS Handler’s Diary now allows comments. Finally!
Category: general
intrusion detection systems and analysts
An interesting article (and comments) about IDS technology over at Security Focus. IDS is still a discussion-starter, and you can get a huge range of valid responses when asking questions about how to value an IDS system.
more reasons why businesses are insecure
Anton Chuvakin linked me over to an article about 7 reasons businesses are insecure. Check the reasons, as they are good ones.
I wanted to add a couple more, however.
8. Economics. Let’s face it: security costs a company money and time, and unless there are regulatory or economic reasons (or surplus budget!), a company really won’t spend more money on security. Companies are economic entities and as such work to maximize their profits. Some people don’t like to talk about that, but that’s reality. And this works not just on a macroscopic level with budgets, but also on a microscopic level: do your IT techs prioritize security projects behind business-facing projects and fires? Yes, they do. Doh!
9. Technical gulf from the trenches to the upper offices. When a CISO proclaims his company secure, most of us snicker a bit and throw back another shot of JD. When a CISO proclaims his company is in compliance and has a strong security process, do you really think he knows what the hell he is talking about? Or is he just playing salesman-lipservice and really has no clue if the company geeks really are making things secure? Often I wonder about that gulf between the techs and the upper offices and which reality each is living in day to day. Some CISOs Get It and know their environment, but I think those with a Clue are still in a huge minority (not necessarily because they’re not technically proficient, but simply because sometimes they are just too removed from the day-to-day).
9.5 Likewise, does your audit/security team have the skills necessary to tell the difference between secure and insecure, or are they just going over a checklist and then going to lunch? Technical expertise with regard to security is spotty in the technical ranks, especially on a broad level. I believe that more efforts in user education should be pointed towards technical staff (security and general IT) and not towards general employees.
on the art of balancing awareness and technological security
I like Kurt Wismer’s post, “the user is part of the system.” This is true.
I’m often misunderstood when I take a stance against user awareness types; often I’m taken as being totally against user education, when in fact I am just against over-emphasizing user education as the way to achieve security. I don’t agree with that approach; user education is like compliance: it educates the lowest denominators in a corporation, but it won’t stop malicious activity or mistakes. It helps eliminate naive or ignorant mistakes. (Ok, I’ll grant that some people will greatly benefit from and listen to awareness, but that simply cannot be all people.) A blend of awareness and technology is what I feel is the key, although I’ll put just a bit more weight on the objectivity of technology… I mean, there is a reason social engineering always works, even with obscene amounts of user education.
I’m a firm believer in technological controls to mitigate the stupid choices that users can make, or simply limit what they can do. Taking this to an extreme is just as bad as taking user education to an extreme: we can create a nice, tidy, restrictive, safe cage for users to sit in and do their work. But is that cage going to make that user happy and productive, or docile and uncreative? This can lead to a discussion on where security should lie: the system, or the network. Some may say the system is already lost because we can’t make it a stifling cage…not without affecting our users greatly.
It seems that having freedom of choice is a fundamental part of the human condition, even to the point that we all bend or outright break rules every day, such as traffic rules. If people bend or break those rules when it has very real, obvious consequences, how do we really think users will act regarding our own company policies that are much more arcane and whose threats are far removed? Are your users ultimately happier having admin rights on a system, or having a set cache of programs they can use and nothing more?
Is this maybe one reason the web has become so enabled in the last few years? We try to control what they can do, so they use port 80 and a web browser…is the desire for choice and freedom always going to trump our smaller, user-impacting security approaches?
That’s really part of the art of corporate security; finding that balance that works. It is also the unfortunate part of our industry: no one standard is going to work. One person’s approach won’t work in every situation or every corporation. More so than the thousands of solutions each company can have to solve various needs and problems, the users are even more varied and unique. Ok, fine, very general rules will work, like “patch your systems.” But let’s face it; that shit is the easy part, the part any arm-chair analyst can recite.
Nonetheless, I love such discussions, even if there is not ultimate agreement. At least we’re talking about it and being open to creative solutions. I’d almost rather talk to open-minded people who don’t have an answer to these problems than those who think they know some Merlin-esque answer to solve all our problems everywhere…
dlp and database activity monitoring info from mogull
Someday I will likely need to sound smurt about DLP, even though I think it will be a feature and not a market given a couple years. And then, of course, it will just get watered down and slowly forgotten over the next 5 years. But, still, it’s a buzzword with mgmt.
So for my own future edification, this post is a pointer over to Rich Mogull’s 7+1 part series on DLP. Part 7 includes links to the other 6, and the +1 is an overview of the recent trend of DLP acquisitions.
And just because I don’t want another post, Mogull also has information in a 2 part series about database activity monitoring products. Part 1 and Part 2.
a little bit of age perspective
It is difficult to get a sense of someone’s age in this digital world, so I wanted to take a quick moment to let any readers know that I turned 30 yesterday. So, yes, I can talk about Star Wars and only think about the first three movies.
ebb and flow
“Sun and moon travel through the sky, they set and rise again. The four seasons succeed one another, flourishing and then fading again. This is a metaphor for the interchange of surprise unorthodox movements and orthodox direct confrontation, mixing together into a whole ending and beginning infinitely.” -The Book of War, Chapter 5: Strategic Advance.
sometimes developers just aren’t playing for the same team
This is the kind of stuff that makes us admins infuriated at developers. Just to illustrate, pretend we have 3 testing environments and then Production for a web app. Env1, Env2, Env3, Prod. It is expected code will be moved up through those environments sequentially.
Developer Ted: I am rolling out code to environments Env1 and Env2 at the same time.
[a few hours later]
Developer Ralph: Env2 is broken and my coworkers and I can’t get anything done. What’s wrong?
Admin Mike: The code rolled out to multiple environments earlier broke things and made the environments unstable. We’re working to fix this now. Talk to Ted, who rolled it out in multiple places all at once, about not doing that in the future.
Developer Ralph: But I need to work now, and so do my coworkers. We’re going to start doing our work in Env3.
Admin Mike: [blank look knowing I don’t have authority here]
[short amount of time passes]
Developer Ralph: I need support in Env3 because it is not working properly now.
Admin Mike: Well, some of the stuff you moved up shouldn’t have been moved up and that environment is borked now and we’ll have to expend more energy to fix it.
Developer Ralph: But I and my people need to work, should we start moving to test in Production?
At this point strangling the developer actually seems like a plausible mitigation to further destruction and downtime…
powershell working with time objects
I have a perpetually running powershell script which is always looking at a text file to see if an install is scheduled to run within the next 2 minutes. This text file just contains a list of times when installs should run (or nothing). I want this install to run every night at 12:10 am. To do this, I need to make a list of the next 100 days’ worth of 12:10am entries.
$basetime = get-date "11/15/2007 12:10 AM"
[array]$times = @()
for($a=0;$a -lt 100;$a++){ $times += "$($basetime.AddDays($a).ToString()) both" }
$times
11/15/2007 12:10:00 AM both
11/16/2007 12:10:00 AM both
11/17/2007 12:10:00 AM both…
This gives me a list of 100 strings that can be read into get-date as a time/date object!
$blah = get-date $times[3].replace(" both","")
Why the hell is that “both” part in there? Well, that’s something just for me, which describes the install that is occurring. When evaluating schedule entries, I replace those off and trim the string down. Why do I want to read this into get-date again? So I can do better compares!
$objScheduleTime = Get-Date $blah
if ($objScheduleTime.GetTypeCode() -ne "DateTime")
{ "timedate is invalid" }
else
{
$TimeDifference = $objScheduleTime - (Get-Date)
if ($TimeDifference -lt 0)
{ "time is in the past" }
else
{ "time is in the future" }
}
First, convert $blah into a date-time object, then check the type code to make sure it converted correctly. Incorrect conversions need to be handled and not continue as a null object, or the rest of the script will complain. As usual, there are plenty of ways to do this, but this makes sense to me.
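Tying the pieces together, here is a minimal sketch of the check the perpetual script might run; the file path, the " both" suffix handling, and the 2-minute window are assumptions based on my description above, not the script itself:

```powershell
# Hypothetical sketch: scan the schedule file and fire any install
# due within the next 2 minutes. The path is an assumption.
$scheduleFile = "C:\scripts\installtimes.txt"
$now = Get-Date
foreach ($line in (Get-Content $scheduleFile))
{
    # strip the descriptive " both" suffix, then convert to a DateTime
    $objScheduleTime = Get-Date $line.Replace(" both","")
    $TimeDifference = $objScheduleTime - $now
    # run only if the entry is in the future but within 2 minutes
    if ($TimeDifference.TotalMinutes -ge 0 -and $TimeDifference.TotalMinutes -le 2)
    { "install at $objScheduleTime is due - kicking it off" }
}
```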
soccer goal security, risk analysis, and more from an auditor
I hesitate to post this link, which I gleaned from Anton Chuvakin’s blog, because it has a lot of hard-to-read sentences and rambles a bit, but it has enough stuff to be thought-provoking. Anton Aylward’s post deals with soccer goal security, but touches on a ton of things involving security.
In his marvelous 1992 novel “Snow Crash”, Neal Stephenson describes a franchising system and makes reference to the “three ring manual”. This manual is the set of operating procedures for the franchise: who does what and how, down to the smallest detail. I mention this in contrast to, for example, some of the businesses that failed after 9/11. These businesses did not have any ‘plant’ – desks, computers, software, even data – that could not be replaced. They failed because their real assets were not documented – the business processes existed solely “in the heads” of the people carrying them out.
The real assets of a company are not the COTS components. This is a mistake that technical people make. The ex-IBM consultant, Gerry Weinberg, the guy who came up with the term “egoless programming”, also pointed out that people with strong technical backgrounds can convert any task into a technical task, thus avoiding work they don’t want to do. Once upon a time I excelled in the technical side of things, but I found that limited my ability to influence change with management.
Interesting stuff. Anton A. is an auditor, and as such has a unique perspective on the industry. It is easy (maddeningly easy) to point out the flaws in other people or businesses or processes, and no one does it better than auditors. Kinda like IT journalists who can spout off best practices and “told ya so’s” but don’t know anything about IT beyond their home office 10-in-1 fax printer. Ok, that’s unfair to the auditors, as they do have more usefulness and knowledge, in my book. 🙂
powershell and active directory searching
I’ve been doing some more work using PowerShell for small ad-hoc types of scripts. Basically I keep some notes around, and adjust those notes for what I need at the time. This works great when I need to query certain things from our Active Directory. While we use AD a lot, only my team uses it, which means it gets messy and out of sync quickly.
A recent request required me to pull all the supervisors and managers in our company. Odd, but no one keeps a list of these, nor do we have neat groups in AD to accommodate the request. Great. I could, however, pull out everyone who is listed as having a “direct report” in their AD account, which is something the desktop techs *are* good about updating.*
$objADSearcher = new-object DirectoryServices.DirectorySearcher([ADSI]"")
$objADSearcher.filter = "(&(ObjectClass=User))"
$objFoundUsers = $objADSearcher.FindAll()
[array]$objADUsers = @()
foreach ($t in $objFoundUsers)
{
if ($t.properties.directreports)
{
$t.properties.name
$objADUsers += $t
}
}
This snippet will search out all user accounts in AD and display the names of those who have direct reports. Further properties on any given account can be found by doing a .properties on it, e.g. $objADUsers[45].properties.
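Worth noting as an alternative (a sketch, and maybe not the optimal way): the direct-reports check can be pushed into the LDAP filter itself, so AD only returns accounts that actually have someone reporting to them:

```powershell
# Sketch: filter server-side instead of looping over every user account.
# (directReports=*) matches any account with at least one direct report.
$objADSearcher = new-object DirectoryServices.DirectorySearcher([ADSI]"")
$objADSearcher.filter = "(&(ObjectClass=User)(directReports=*))"
foreach ($t in $objADSearcher.FindAll()) { $t.properties.name }
```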
I’ve also had a need to quickly find all the members of a group in a way that allows me to copy and paste the results.
$i = "Supervisors Group"
$objADSearcher = new-object DirectoryServices.DirectorySearcher([ADSI]"")
$objADSearcher.filter = "(&(ObjectClass=Group)(name=$i))"
$objFoundGroup = $objADSearcher.FindAll()
$objFoundGroup[0].properties.member
This will display the result of the search for Supervisors Group. Even if only one object is returned, I often forget that I still need to reference it by index [0].
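The member property comes back as a list of distinguishedNames. If friendlier names are wanted for the copy/paste, one approach (a sketch; displayName assumes that attribute is actually populated in your AD) is to bind each DN with [ADSI]:

```powershell
# Sketch: resolve each member DN into its display name.
foreach ($dn in $objFoundGroup[0].properties.member)
{
    $objMember = [ADSI]("LDAP://" + $dn)
    $objMember.displayName
}
```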
Now, if I get a user back and want to connect directly into their AD object, I need to leverage the path property.
$ADSPath = $objFoundUsers[0].path
$container = [ADSI]$ADSPath
$container.manager
$container.directreports
* I am positive there are many ways to accomplish these tasks, and I may not be doing the most optimal method, however, this method does work for me for now, until I find some better way.
rant on the economics of disk storage and business priorities
The economics of IT are always going to be a pain point. Sadly, such penny-pinching when it comes to IT spending can result in some pretty creative issues. This is just a small Friday rant from work, so read at your expense!
Today we had a web server D drive fill up (the drive with our data), which caused some errors to start occurring on that server. This filled up because the log files weren’t getting cleaned up. We didn’t get alerts because our web servers run on such small disks that we were getting constant reminders about low disk space, so we turned them off, as no one would pony up for more space. *
The log files weren’t getting cleaned up because a separate web log processing server’s disk was full and couldn’t pull the logs in anymore. That filled up because no one a) wants to make a policy on how long to keep log files or how important they are, so they are kept forever, and b) wants to look at the criticality of the server and assign a dollar value, which could then be used to offset costs for more storage. So it stays with the disks it has.
So a non-critical system that can’t get more storage due to penny-pinching caused an intermittent production outage on a system that itself is running on fumes because no one wants to put out for more storage. Capacity planning and budget submissions are one thing, but as much as we do them, the exec/business side continues to say “No thanks,” to the expense.
Ugh! I understand this can be a way to go for companies, kind of a JIT of disk storage, but it really, really helps to be up front with that policy so IT staff doesn’t have to constantly work in a “worry/told you before” sort of mode all the time. It’s just not important until it brings down production and clients notice. Sounds awfully similar to security!
* I love the little side risk to this practice. Developers can put out code quite easily enough on their own to fill the disks and cause web servers to all die in production. And even if intent isn’t there, we do run the risk that someone will accidentally publish something large that effects a DoS.
misunderstood hushmail hands over mail records
I’m still playing news catch-up, but I was drawn to this Wired blog post about Hushmail handing over mail records. This is a confusing article, quite honestly.
First, I will swear that Hushmail has been offering webmail service far prior to 2006 as mentioned in the article. I’ve been using them off and on for many years (both free and pay accounts), and definitely prior to 2006.
Second, I’ve never been aware of any sort of Java applets or encryption when doing mail with Hushmail. Maybe this is just in the commercial version, but I suspect it really only works with email sent to other Hushmail users or recipients forced to log into Hushmail to retrieve the mail.* I can also attest to never, ever having to supply any passphrases, only the password to my login. So this whole encryption thing with Hushmail is a niche that I would be willing to bet few people truly use or were even aware of.
Still, Hushmail seems a very misunderstood service, as they market to security conscious people as being anonymous and private, when in fact it really is no less private than Gmail, unless you use their annoying and “non-solution” tools (and as the article demonstrates, even that isn’t solid). I personally just liked having the anonymity, as opposed to the privacy.
If someone were truly paranoid enough about their email privacy and anonymity, they are much better off scouring the net for open mail relays, using pgp, and then sending through an ever-rotating list of relays to their recipients. This protects the message in transit, spreads out your mail to such a degree that no one can form a profile of you, and hides your own originating information. And even that doesn’t protect your address unless you use rotating and/or disposable mail addresses…
* I really don’t agree with that approach to email security, and most people who use it really hate the annoyance of having yet another web site to get mail, rather than it coming to their own mailboxes. And yes, we have a secure mail solution that does this, but users both internal and external either don’t understand how to use it or actively hate it and try their damnedest to work around it… it’s just a terribly lame approach. What really sucks is marketing who then tries to say they secure email with encryption when I damn well know they can’t unless it never leaves their servers. Such misleading garbage that sucks in less-technical purchasers.
tool and book releases in my inbox
There have been a number of things released or updated recently that I want to try out, update, or read. Typically at work if I see new things, I’ll send notes to myself at home on my gmail account, but lately this has been getting jammed up as work has been insane. So I’m offloading some of the quick notes into blog posts… who knows, maybe someone else will like these too!
OSSEC 1.4 has been released. This is still on my short list of projects.
IDS Policy Manager 2.2 has been released. I’d love to check this out, but I need to get my Snort box fixed at home.
fgdump 1.7 is out. fgdump is a utility for dumping Windows passwords, aka using pwdump more successfully and remotely.
Saw a note and placed an order today for a copy of Michael Rash’s latest book, Linux Firewalls: Attack Detection and Response.
Nipper 0.10.8 is out. Nipper can perform security audits on Cisco device configs.
proper education against werewolves?
I just wanted to capture some words from Bejtlich for my own preservation here because they rock. Feel free to take both sentences as wholly different subjects.
Forget about user education; I recommend management education. Deflect silver bullets.
If you want to read the post this was taken from… A-fucking-men. We can’t expect business and users to Get It if our own IT staffs and managers don’t Get It.