still settling in

The lack of updates here should be followed by a slew of posts after the first of the year. Right now I am porting all my old Blosxom posts over to this site, flagging them for my “being built” wiki, or just removing them as I figure out how best to leverage my sites. I will say that I really enjoyed the simplicity of Blosxom, especially as a blogging/site tool when you don’t want a true database backend. It was slick, simple, lightweight, and kinda fun to work with. Unfortunately, it is not quite as robust as a true CMS/blogging platform. Honestly, I think the worst part about it is just being locked into something a little different and non-mainstream. Who knows whether there will be new features or support over time, and I’d hate to find myself 4 years and 2,000 posts into the future facing a huge migration project to something more mainstream.

Overall, though, Blosxom is awesome, and I hope someday I can possibly find a use for it.

weekend projects done

I didn’t get to play with SpamAssassin yet, but I did get a lot of other little things accomplished this weekend in regard to my site. I installed hMailServer and ClamWin so that I could move my mail server over to the new box. In fact, I went a step beyond my plans and am using OpenSSL and stunnel to allow SMTP and POP over SSL, so that I can check things remotely from a wireless hotspot. I also moved my Ventrilo server over and did some housekeeping on my websites; busywork that I’ve been putting off for many months but that only needed to be done once to be done for good.
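
Since stunnel just wraps the plain services in SSL, any POP3S-aware client can verify the setup from the road. Here is a minimal sketch of the kind of check I mean, in Python; the hostname, port, and credentials below are placeholders, not my real setup:

```python
# Quick check that the POP3-over-SSL endpoint answers and has mail waiting.
import poplib

HOST = "mail.example.com"  # hypothetical; substitute your own server
PORT = 995                 # standard POP3S port exposed via stunnel

conn = poplib.POP3_SSL(HOST, PORT)
try:
    conn.user("someuser")          # hypothetical credentials
    conn.pass_("not-my-password")
    count, size = conn.stat()      # message count and mailbox size in bytes
    print(f"{count} messages, {size} bytes waiting")
finally:
    conn.quit()
```

If stat() comes back without a certificate or protocol error, the stunnel wrapping is doing its job.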

With all of that done, I’m looking forward to tackling SpamAssassin sometime this week or next weekend, and to working on my wiki site as well.

Every time I work on my sites, I get that familiar bug to pick up a new web language and get really good at it. I love reading people like Jeremiah Grossman and RSnake, guys whose web skillz I really respect and appreciate. But I know that takes significant dedication and time, and I can’t specialize in everything right now. Maybe someday I’ll have an opportunity to go down that road, either for my job or in my free time once I get other things under my belt. Anyone can learn web coding, but doing it well and knowing the little “expert”-level tricks is where I would want to be, and that takes significant time. Besides, right now, web technology is simply not securable anymore. Unless you want a fairly static site with little integration and scalability, security is just not attainable these days.

not only have criminals matured, but so have security pros!

Ten years ago, it was still common to refer to hacking groups by their creative and rather dark names like Cult of the Dead Cow, and to individuals by handles like Master of Disaster. These days, hacker criminals (note the use of “criminals” to qualify an otherwise non-negative “hacker”) have matured their practices from being merely curious, annoying, and destructive to being profitable. But so have the security pros. While there are still people like Major Malfunction and Phenoelit around (and many, many others), just look at my list of links to the right, especially the blogs. I now see more real-life names there than I ever would have 10 years ago.

That’s not to say hackers don’t have witty handles anymore, but there is a maturing going on in all facets of this industry. Curious, if nothing else. Me? I like the ability to use a handle to protect my identity online just a little bit more. Games, forums, IRC, IM…everything still asks for a unique username, so I may as well blend that in with my industry handle. Better than being Avengerr26078 or Neo643389x!

removal of links and rss feeds

As Adnan recently realized, I too am finding that I have too many links, news sites, and blogs to read, and they steal away my time. I am starting to feel like an analyst: talking and reading, but never actually doing anything. So I’m pruning some more links and RSS feeds. As usual, I’m posting the “death” list here, just so I can reference it again at some later time.

I was going through this list removing people and looking at sites, and it makes me kinda sad to remove some links and blogs, especially those of people who might still be around but don’t post every day (or even every week), or who make posts I’m just not interested in. I got into computers by being social online in AOL chat rooms, then later in IRC and forums. This culling of links saddens me because I know the authors and I share common interests, and I love seeing how they present themselves online, in this sort of second-world avatar image. Oh well, life goes on, and I hope it finds them all happy. Of course, to go with this huge list of outgoing links, someday soon I’ll have a list of incoming links as well.

WBGLinks.net was originally a huge list of white, black, and grey hat links to many other topics and sites. It has since disappeared. Wintermute has also had little to say lately. Dan Kaminsky has excellent tools, presentations, and very creative ideas, but his blog is not the place to read them; he is easily Googled anyway. The guys at Checkmate only update once a month, and if they offer up something useful enough to read, I’m sure I’ll get linked to it from elsewhere. I always hoped the TheSecure.net guys would come back and keep posting, but not only did they go on hiatus for a year, their site is now gone.

Adminspotting had a fairly short but informative life and is no longer updated. I’ve long hoped the author would post the new idea mentioned in the blog, but he has not. Maybe someday. Adminfoo’s provider seems to have had data recovery/corruption issues, which have left the site down for a while now. Backups! Reading the linked host’s status page is pretty much the story all IT admins dread: corrupted data and customers getting upset. Oddly, HERT (Hacker Emergency Response Team) seems to be down or gone.

Nitesh isn’t around. The Microsoft Security Response Center blog is really not that useful, and when it is, other people link to it for me. Besides, with something as important as that blog could be, they will always be regulated from inside. OpenPacket.org is an awesome idea, but I suspect everyone who thinks so is just too busy doing other things as well. I’ll link it up if it ever truly opens. Arved has been removed. The Geekpit has been removed. I’m not even sure what Infosec Daily is anymore, but I think it aggregates other sources I already track and doesn’t look very pretty anyway. Insecure.org is not a news site and belongs under tools/resources. Of course, it’s already there! SecurityWonk has disappeared. Also removing SecuritySauce. Nepenthes is a tool, and didn’t belong here anyway. Kaosx has been removed. Jon Ellch’s site was never really meant as a news/blog site anyway.

weekend projects

Hopefully I can finish the one or two projects I need to work on this weekend. Tonight will be spent playing Warcraft, and Saturday night drinking, playing video games, and talking about hacking. That leaves Saturday afternoon and Sunday to get a new mail server set up on my server, along with a SpamAssassin install. I also need to point my new domain to this site and fix the inevitable pointer issues in my code.

I’m not really looking forward to SpamAssassin. While I’ve never used it before and really want to learn it, all indicators point to it needing a fair bit of work and babysitting to be worthwhile. Oh well, may as well start this weekend and slowly work on it, kinda like securing Apache and learning mod_security.

I’ll try my best to provide a report on here about my experiences with hMailServer and SpamAssassin on my Windows box.

death by a thousand cuts…the details will kill us

UCLA just announced the disclosure of private data on 800,000 people. I find it disturbing that the “attacks occurred between October 2005 and November 2006.” I suspect that is only as far back as the backup tapes and/or logs go. And there were multiple attacks? I would be willing to put money down on the detection being accidental on the part of the network admins: someone looking at something they normally don’t, or noticing something odd while troubleshooting an unrelated error, as opposed to an IDS barking alerts, alarms going off, or the attacker(s) being noisy.

Information insecurity isn’t going away, and it is very hard to ultimately protect juicy targets. IT is understaffed and underbudgeted. We complained about this 8 years ago, and we still complain about it, because technology and information have grown much faster than staffing has.

We also have an inability to share information. We work in an industry that cannot disclose details without the very real fear of lawsuits. But we desperately need to share this. We need to share what broke down in UCLA’s detection strategies. We need to share how they learned of the incident and investigated it. We need to share the good and the bad, what works and what doesn’t, the internal political barriers and the champions who push through them. Otherwise this issue will never go away, and we’ll only have analysts and journalists telling us (and our management) what “should” be done with absolutely no regard for the feasibility of those measures. (Of note, I love analysts/journalists telling companies how easy it is to encrypt full hard drives just because they were able to encrypt their own hard drive once, two weeks ago, and then didn’t like it and removed it…)

If we are to start making headway, we need the details. Otherwise the details, in their silence, will kill prevention.

how much longer will open source last?

Open source software is considered by many to be the untainted version of the freeware available on the web. Far too often, “freeware” bundles in other, smaller programs, from announced installs like the Google or Yahoo toolbars to unannounced installs like spyware and adware. Open source is a much, much more trusted “standard,” letting web surfers download and install programs while sleeping easier at night.

But I wonder how long such trust will last. I download and install open source apps regularly; in fact, unless I know the application well, I don’t install a closed source app when an open source alternative exists. But do I look at the source code to check and make sure some spyware app isn’t packaged inside it? How many other people compile the source themselves, let alone truly understand the code enough to feel safe? And if someone with programming knowledge does this, will he be able to let the rest of us know and “out” the application?

Right now we (I) have blind trust in anything deemed open source, and maybe a little more trust in open source available on SourceForge or through a package manager, but there will someday come a time when even open source is not safe from the little things installed by determined marketers. What if an application is only really “safe” when manually compiled from source, but the compiled binary version has small print in the EULA hinting at additional software…?
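
Short of auditing the code ourselves, the least we can do is make sure the thing we downloaded is the thing the project published. A minimal sketch, assuming the project posts a SHA-256 digest somewhere trustworthy; the filename and digest below are placeholders:

```python
# Verify a downloaded source tarball against the project's published SHA-256
# digest before building it. Filename and digest are hypothetical.
import hashlib

TARBALL = "someapp-1.0.tar.gz"              # hypothetical download
EXPECTED = "paste-the-published-digest-here"

h = hashlib.sha256()
with open(TARBALL, "rb") as f:
    for chunk in iter(lambda: f.read(65536), b""):
        h.update(chunk)

if h.hexdigest() == EXPECTED:
    print("digest matches; build away")
else:
    print("digest MISMATCH; do not build or run this")
```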

hoping ISPs are not going to tackle the botnets and zombies

There is more and more talk of people (typically people who just talk about things, i.e. analysts, as opposed to people who actually *do* anything) wanting the ISPs to take up the battle against botnets and zombies. Personally, I feel that if ISPs are forced into taking care of things closer to the end-user, or things that affect the end-user (through detection and/or shunning after a threshold), they’re going to go balls-out and go farther than I, as a consumer, want them to go.

It is already difficult enough to shop around for an ISP that gives me a static IP (or at least a very low-turnover dynamic one), allows me unfettered incoming and outgoing ports, and lets me run my own mail and DNS servers as I see fit. I don’t want that crap done for me. And I don’t want to pay for business-class service. But if I were an ISP forced to go this route, forced to tackle a layer of the communications I wasn’t really supposed to tackle (this is like asking the physical layer to protect the sessions), I would make damn sure I logged everything I could and got as far and as thorough as I could before consumers started decrying privacy issues and freedom of service. This is a ball I do not want to get rolling.

Besides, I don’t really think ISPs are going to dent that particular problem right now. I’d rather they were left to focus on what they do best: uptime, reliability, and faster circuits. I don’t want my system shunned (loss of reliability) because one of my neighbors can’t stop visiting infested porn pages, or shunned out of the blue if it is my system that’s affected.

But yes, I do think security will still head toward the switch, only the switch will be inside corporations and inside the user’s home.

on security workarounds and knowledge

I am often amazed at some of the solutions to security problems that organizations and people implement. A situation recently came across a mailing list involving a web-based system developed to “hide” the URL bar from users so they couldn’t see and/or manipulate the URL. This is almost certainly meant to obfuscate sensitive data in the URL and possibly avoid the risk of that data being manipulated (the classic www.domain.com?price=199 variable, which can be changed to change the actual price). Now IE7 is out, which forces the URL to be displayed. Kind of defeats the purpose, no?
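
For what it’s worth, the real fix isn’t hiding the URL at all; it’s never trusting a price that arrives from the client. A hypothetical sketch of the idea (the catalog and function names are mine, not from the system in question):

```python
# The request supplies only an item ID; the server looks the price up itself.
PRICES = {"widget": 199, "gadget": 349}  # hypothetical server-side catalog

def checkout(item_id: str, quantity: int) -> int:
    """Compute a total from server-held prices, ignoring any client value."""
    if item_id not in PRICES:
        raise ValueError("unknown item")
    if quantity < 1:
        raise ValueError("bad quantity")
    return PRICES[item_id] * quantity

# Even if a user edits ?price=1 into the URL, it never reaches this math.
print(checkout("widget", 2))  # 398
```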

Other times there can be some very creative ways to deal with security issues. SMTP “security” can be achieved by capturing emails with “SSN” in the body and saving them on the mail server for pickup by the recipient party. This does not fix anything in SMTP or email; it just changes the path of the missive. Sadly, it is usually pretty annoying from the recipient’s point of view.
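
The mechanics of that workaround are about as simple as you’d guess: pattern-match the body and divert anything that trips the filter. A sketch of the matching half; the regex and hold logic are simplified assumptions on my part:

```python
# Flag mail whose body looks like it contains an SSN, so it can be held for
# pickup instead of relayed.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # 123-45-6789 style

def should_hold(body: str) -> bool:
    """Return True if the message body appears to contain an SSN."""
    return bool(SSN_PATTERN.search(body)) or "SSN" in body.upper()

print(should_hold("Her SSN is 123-45-6789"))  # True -> hold for pickup
print(should_hold("Lunch at noon?"))          # False -> relay normally
```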

These are sometimes just patches and workarounds to the real, deep issues of security. In the first example, the app should have been rewritten to display a sanitized URL. In the second, figure out a better way to utilize email or try to re-invent SMTP (hard sell, that).

I’ve found that there is an endless supply of creative, work-around ideas in the field of security, and I think a large part of that is a function of the skill level in the field. As more and more auditors (people who check lists…), non-geeks, and barely competent IT support persons move into this field, the talent and skill get more and more watered down. Instead of understanding the nuances and/or realities of a tool, too often shallow knowledge gives way to ill-conceived workarounds and obfuscations of issues.

It truly takes deeper technical knowledge to effectively and quickly determine security responses and measures (or how to beat them). Someone cannot take a position on securing DNS without understanding how DNS works. Likewise, how do you secure applications that depend on DNS when you don’t even know DNS itself?
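
Even the trivial end of that knowledge matters: an application that “depends on DNS” is trusting whatever answer the resolver hands back. A stdlib-only peek at that raw material; the domain is just an example:

```python
# Print what a simple name lookup actually returns; apps act on this blindly.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr)
```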

Web applications are teeming with this issue. A developer who knows how to program security into the product on the fly, and who codes with security in mind, has a huge advantage over the developer who only knows how to make the functionality work (sometimes in equally ill-conceived ways) and then has to spend tons of time trying to bolt on security down the road. Knowledge would save time and money.
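
A concrete example of that “security on the fly” habit: parameterized queries from day one, so SQL injection never has to be bolted on as a fix later. A self-contained sketch with a hypothetical users table:

```python
# Parameterized queries: the driver binds the value, so injection text
# stays inert data instead of becoming SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('alice', 'admin')]
print(find_user("' OR '1'='1"))  # [] -- the injection attempt fails
```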

I think this is where a lot of bad security comes from: a simple lack of expert-level knowledge. That expertise is tough to achieve anyway, as a security guru tends to be seen as a cost, not a value-add. Such people add value by also doing network/systems administration, which tends to trump security when push comes to shove.

And while budgets, poor management, poor decisions, and other things influence one’s ability to get educated and/or implement solid security endeavors, I still think being an expert in the basics goes a long way. Why implement an expensive NAC solution when you can drop in an old box running arpalert (free) and check for rogue machines that way? Why spend hundreds of man-hours limiting an application’s exposure on the network when you can ensure your code withstands fuzzing attacks?
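
To be clear, this isn’t arpalert’s code, just the idea behind it, sketched with scapy: watch ARP traffic and squawk when a MAC you don’t know shows up. It assumes scapy is installed, the script runs with sniffing privileges, and you maintain the inventory list by hand:

```python
# Minimal rogue-host watcher: alert on ARP traffic from unknown MACs.
from scapy.all import sniff, ARP

KNOWN_MACS = {"00:11:22:33:44:55", "66:77:88:99:aa:bb"}  # your inventory

def watch(pkt):
    if ARP in pkt:
        mac = pkt[ARP].hwsrc.lower()
        if mac not in KNOWN_MACS:
            print(f"rogue? {mac} claims {pkt[ARP].psrc}")

sniff(filter="arp", prn=watch, store=0)  # runs until interrupted
```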

This isn’t the only reason we have insecurity, obviously. There are time pressures, often from outside the competent developer’s control. And there is much to be said for defense in depth, doing everything one can to make a more secure product, but I still believe the basics come first. The obfuscation needs to come after; the creative workarounds that could be obsolete next year need to be second.

The future will remain with open source tools and creative ways of being an expert in the basics, not with spendy, fancy workarounds that too often miss the real points of insecurity or create insecurity themselves. Besides, even something as epidemic as XSS is not a difficult issue to exploit (usually) or to prevent. This is basic stuff that we’re still struggling with.
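
Case in point: the core XSS defense fits in a couple of lines of any language. A stdlib-only sketch, with a hypothetical comment field as the untrusted input:

```python
# Encode untrusted data before it lands in HTML.
from html import escape

user_comment = '<script>alert("gotcha")</script>'

# Rendered raw, the browser executes it; escaped, it displays as text.
safe = escape(user_comment, quote=True)
print(f"<p>{safe}</p>")
# <p>&lt;script&gt;alert(&quot;gotcha&quot;)&lt;/script&gt;</p>
```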

(On the flip side, I find it equally bad to build something complex and be its only expert, as that means only you have the knowledge to make things work…complexity begets complexity begets less security…)

decrypting wireless packets

I made a few discoveries this weekend. First, a wireless access point has popped up in my neighborhood recently that is not encrypted, as a quick test with NetStumbler showed me. Second, my newest used laptop appears to be equipped with an Atheros card. Oh joy! I might just have to dual-boot that guy into Linux!

I hopped on the wireless network to poke around, but the Netgear admin password had been changed, and the one other system on the network was sending very few packets across. In fact, nearly all the packets I picked up were not being decoded properly by Wireshark; they kept coming up with a Belkin MAC and something about malformed packets. I’m wondering if this is a Netgear/Belkin combination using proprietary “speed-boosting” that is mucking up the packets. I fired up the newest Cain as well, just in case something interesting flew by.
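
One quick way to eyeball a weird capture like this is to dump the source MACs and see whether that Belkin prefix really dominates. A small sketch, assuming scapy and a capture file exported from Wireshark (the filename is a placeholder):

```python
# Tally source MACs in a saved capture to see which hardware is talking.
from collections import Counter
from scapy.all import rdpcap

pkts = rdpcap("neighbor.pcap")  # hypothetical Wireshark/tcpdump export
counts = Counter(p.src for p in pkts if hasattr(p, "src"))

for mac, n in counts.most_common(10):
    print(f"{mac}  {n} packets")
```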

I’m not really sure what’s going on, since I’ve not seen this before, but I’ve left the laptop on the network and will check it out over the next week or two. I do have an Internet connection through it. Windows Network Neighborhood gave me the computer name, which happens to be a girl’s name, and the AP SSID was a last name. Tonight I need to check what IP I have so I can get the service provider and an IP for some external testing, although I suspect I won’t find anything useful. Between some Google searches and any traffic I can decode, that is quite a bit of information leaked already.

At any rate, it is fun to have a spare system I can dedicate to wireless stuff. I’ve been wondering what to do with it, as it is a little too big to carry around for real portability (about 10 lbs, and it only just fits in my backpack), especially since I have far lighter systems. But now I think I have at least one use for it: a wireless workhorse.

terminal23 is born

I think I finally have my new “geek” blog ready to roll! The last step was to decide on a name for the site, and I settled on Terminal23 for my own reasons (nothing interesting, really). Now I can start porting over my Blosxom blog entries as needed and get caught up on posting news and such. I really liked Blosxom for its simplicity and elegance, and I would have stuck with it longer, but I think I just wanted something new, and I needed to update the blog application on my personal site anyway.

I do still need to get the wiki up and running, but that will take a bit more time and love. For now, this project has already exceeded my goal of being done by the end of this year.

are we there yet

I’ve seen a few “wide scope” posts lately about the state of security, but this one has some of the best points and presents them very well. Mostly I just want to save it for my own future use.

Just one comment on it. Items 14 and 15 talk about how we cannot seem to agree, as a field, on best practices. Those points are illustrated by item 2, on disclosure practices. Many of us understand both sides of the equation, and even the grey area in between, yet we still fall on all sides of the debate. Sometimes there is really no universally correct answer…especially in a field as complex as IT and security.

sysadmin of the year

The first Sysadmin of the Year awards have just been announced. While half the stuff said is likely embellished and it is just a little pat-on-the-back kind of site, I thought it was interesting to see what these guys did that made their peers and co-workers nominate them.

I also note that not one of them (the Air Force exception aside) is wearing a tie to make him work better. Nor do any of them work for recognizable, large corporations. These guys just plain get things done, as opposed to running the gamut of business politics. And I would be willing to bet that every single one of them truly loves his job and company. Happy == productive == successful.

why security will move to the network

Saturday saw me working most of the day on some productive stuff. On my test server I was finally able to install compatible, updated versions of Apache, MySQL, PHP, and Perl, along with a new version of Movable Type, and I got it all to play nicely and properly render my website in full. PHP and Apache 2 have finally ironed out their issues, so I can proceed with my upgrades.

But this reminds me of the futility of trying to maintain a server and network in terms of security, and I don’t even have all that much stuff installed.

My old server is a Windows 2000 Pro system that would have 100% uptime if not for power outages and apartment moves (and update reboots). It runs Movable Type 1.2, I think, with 4-year-old versions of PHP and Perl, a year-old version of Blosxom, and Apache 1.3. I’ve not really updated the system itself in 4+ years. That’s heavy! And I’m a paranoid security guy who truly does know better!

Now I will have updated versions of all of that, including a wiki program. This means I need to keep up on:
– keeping my installs updated by applying any patches or security upgrades
– keeping my code secure by knowing how to program in a secure fashion
– remaining an expert with all those technologies so that I know how to properly secure them
– testing every update before putting it into full production, or being able to spend the time to recover when something royally screws up
– maintaining resources for backing up the important stuff

My environment is two systems, a half dozen apps, and a single OS version, and that’s already a very big list of tasks up there. Just think how tough it is to stay secure at a corporate level, where every department has its own desires, where software and web developers host things on their own systems when they don’t get their way on servers, and so on. It is no wonder there are thousands of vulnerable forum sites out there running outdated software. You can just wince when coming across an old forum whose last 6,000 posts are spam ads for Viagra.
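
You don’t even need to touch those forums to guess at their state; servers often brag in their response headers. A tiny sketch of checking what a site discloses (the URL is a placeholder):

```python
# Peek at what a web server advertises about itself in response headers.
import urllib.request

req = urllib.request.Request("http://www.example.com/", method="HEAD")
with urllib.request.urlopen(req) as resp:
    for header in ("Server", "X-Powered-By"):
        print(header, ":", resp.headers.get(header, "(not disclosed)"))
```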

Now imagine turnover in the IT space. A 5-year vet of a company leaves and takes a heck of a lot of institutional knowledge with her. That might mean some systems have software installed that no one else knows about, and no one else can manage. Imagine those things being used for something critical, and by the time an issue arises, the company that created the software (or the internal developer who did) is long gone and support is not possible. Legacy is a very apt word for what we will go through as the years go by…legacy apps, systems, servers. It’s not just hardware we need to worry about anymore, or bulky mainframes in basements.

Unless a corporation is very diligent and very controlling (which everyone resists vehemently), the application layer is becoming a lost battle. It is enough for IT divisions to attend to downtime, connectivity, and failed hardware, let alone stay abreast of the latest news on package Y or application X. And we can rest assured we’re also losing the battle of getting security built into software from the start.

As a side note, just think how much knowledge a true security pro needs. Not only do I need to know how to install and secure Apache, I need to know how to break it too. I also need to know what is bad in PHP (i.e. I need to know how to code it). I need to keep up with all those areas and their updates. I need intimate knowledge not just of the OS, the apps, and the code, but of the countless interactions among all of them… Very, very heavy…