ccc23 and a new wiki

The Chaos Communication Congress, now in its 23rd year, has always been one of those conferences that gives me goosebumps when I think about the innovation, creativity, and genius packed into one place for such a short time. I enjoy watching many of the presentations after the fact, since the organizers are quite open about distributing them. They feature some amazing ideas and technologies and tend to be a bit more willing to challenge governments than US cons. One of this year’s bigger attractions is RFID tracking. It will be interesting to see tracking brought more and more into mainstream thinking, much like the recent Nike+iPod tracking revelation.

Also, DNS should be propagated by now for wiki.terminal23.net. MediaWiki freaked out last night when I changed the URL and VirtualHost for the site, but a quick reinstall made it happy again. I don’t have much there yet, and it is intended mainly as a resource for me to track tools and tutorials, but I have moved far enough down that road to link it up from here.

as the worm turns

Kevin Liston over at the SANS Handler Diary recently posted about worms and why we won’t see an SNMP-borne, “Slammer-like” Internet worm, or maybe even any more worms like Slammer at all, despite the opening given by MS06-074.

I think he is mostly correct. The Slammer worm exploited SQL Server/MSDE instances and caused a huge amount of havoc through the unintended side effect of flooding most networks with packets, to the point that they were unusable. From worms like this, authors have learned that if you want an effective worm, you don’t overload your own pipelines. Rabbits may multiply like nothing else, but once you get 5,673 of them stampeding over a bridge toward new food sources, the bridge collapses and they’re all dead in the water, so to speak.

I think Liston’s best point was the oddity of so many udp 1434 ports open to the world back then. That defies the common sense administrators have today, where databases are (or should be!) nestled deep inside the network, with a few layers of protection between them and incoming Internet traffic. Firewalls have been built up quite a lot over the years, and I think many networks are much more resilient to network-borne worms coming from a public network. Unless something is able to pop apps on commonly opened, widely used ports (we’re probably looking at IIS/Apache, sendmail, SSH/telnet, BIND…), I don’t see any major outbreaks on the horizon. What we’re then left with would be widespread apps running on IIS/Apache (Web 2.0 or common packages like phpBB), or perhaps IM propagation should something in a message be able to pop the client. And of course, some discovery in Cisco equipment could be catastrophic, as that gear makes up more and more of the bricks in our perimeters.
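
Just to make the “open to the world” point concrete, here’s a minimal Python sketch of checking which TCP services on a box actually answer from the outside. The address and port list are placeholders of my own, not anything from Liston’s post, and you should obviously only point it at hosts you’re authorized to test:

    import socket

    HOST = "192.0.2.10"              # placeholder address (RFC 5737 documentation range)
    PORTS = [22, 25, 80, 443, 1433]  # SSH, SMTP, HTTP, HTTPS, MS SQL; Slammer's udp 1434
                                     # and SNMP's udp 161 would need a UDP probe instead

    for port in PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2)
        state = "open" if s.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print("%s:%d %s" % (HOST, port, state))
        s.close()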

Now, that may nicely cover Internet-borne worms attacking over the dangerous public networks, but that is not to say there won’t be pockets (sometimes LARGE pockets) of an SNMP worm. Even past the heyday of the Slammer worm there were still terrible outbreaks as laptops took hold and developers moved offsite with Slammer-susceptible MSDE instances. Once back in the comfort of the home network, such instances gobbled up any unpatched systems and vomited out onto the network wires. Similarly, an SNMP worm can piggyback its way inside a network, or be delivered via email or other means. Once loose inside a network, it can still have a catastrophic effect locally.

I have heard often that the network perimeter has disappeared. I disagree. Our networks have simply become more ephemeral, kind of like the kids starting to play outside the house and getting dirty by dinnertime. The house is still there; the perimeter is still there. I imagine as IPv6 starts to get realized (someday?) the calls will arise to do away with NAT and the perimeter, once public address space is again effectively limitless. But, of course, that would pave the way for worms to come out of hibernation, so I hope the perimeter is going nowhere even with IPv6.

Liston’s third point mentions something lots of people have repeated all year long: malware authors have become more interested in profit than notoriety. Well, how about being paid to disrupt a competitor’s network? And you just happen to have the ability to create an SNMP worm? And what if that competitor has poor network design, runs SNMP on its internal servers, and has a long cycle before those servers get patched? You might be able to realize this financial gain by sending your worm packaged into an email attachment, or by scattering some USB flash drives in the parking lot (with eye-catching glitter painted on to attract attention) with the worm set to autorun. All it might take is one execution and bam, their servers go from the same ol’ grind to being lightly tickled to flat-out raising a new flag of ownership. Dramatic, yes.

Or, hang out at a local wireless hotspot the employees frequent. With their laptops. Once away from the hardened corporate network, those devices may be ripe for the picking…and for the planting of a worm. Maybe corporate espionage is already here, but I suspect it will continue to get worse, whether the media picks up on it or not.

html in email

Maybe I am a bit old-school already, but I like the sound of this news post:

Due to an increased network threat condition, the Defense Department is
blocking all HTML-based e-mail messages…

The JTF-GNO mandated use of plain text e-mail because HTML messages pose
a threat to DOD because HTML text can be infected with spyware and, in
some cases, executable code that could enable intruders to gain access
to DOD networks, the JTF-GNO spokesman said.

In an e-mail to Federal Computer Week, a Navy user said that any HTML
messages sent to his account are automatically converted to plain text.

This is one of those battles I resoundingly lost at my last job: forcing Outlook to display emails as plain text. I’m one of those people who sees absolutely no need to make emails look pretty with embedded pictures. Marketing and sales think otherwise, of course. As far as my own emailing habits go, I’m pretty strict about sending all my outgoing email as plain text, and rendering most incoming mail as plain text as well. You eliminate huge swaths of attacks by turning off HTML rendering in email programs…enough that what’s left really comes down to the sheer stupidity of clicking links or running attachments, and you avoid all the hidden junk: JavaScript, remote content calls, and misleading links.
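
As a rough illustration of the “convert it to plain text” approach the Navy user describes, here’s a sketch using Python’s email module. This is just my own toy example of the idea, not how DOD (or anyone else) actually implements it:

    import email
    import sys

    def to_plain(raw):
        # keep only the text/plain content of a message; drop any HTML alternative
        msg = email.message_from_string(raw)
        if not msg.is_multipart():
            # single-part message: keep it only if it is already plain text
            if msg.get_content_type() == "text/plain":
                return msg.get_payload(decode=True)
            return b""  # a real gateway would run an HTML-stripper here instead
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                return part.get_payload(decode=True)
        return b""

    if __name__ == "__main__":
        print(to_plain(sys.stdin.read()).decode("utf-8", "replace"))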

If something needs to look pretty, put it in an attachment or link to the website inside the email body.

schneier

I read Bruce Schneier’s weblog on a pretty much daily basis, and I truly appreciate what he brings to security punditry, especially things outside of strictly network and computer security.

But the more I read from Bruce, the more I am convinced that the kinds of stories he points out will be with us forever, everywhere. No security that relies on people will ever be impossible to circumvent, even if only by accident, one time out of 1,000,000. That fuels pundits like him, because the stories will never go away. People will always make mistakes, and someone somewhere will point it out and make everyone else cry that we should have 100% perfect security and spend more money to remove that last 0.01% failure rate. That’s just not always realistic. The effort is nice, and I do appreciate his work to keep people from being blissfully ignorant about what security really is versus the perception, but he is like sugar to me. Take samples of it, not heaping spoonfuls, for best enjoyment.

a bunch of papers from my old site that I need to reprint or read

(note: I will be removing these as I read them.) Update: I’ve decided not to remove some of them, as they are “classics” and I’d like to keep the links for possible future reference.

This GIAC practical paper is a massive look at the firewall stance of a fictitious company’s complicated network. A very detailed paper, and one I really look forward to reading someday soon.

A paper on detecting wireless discovery tools like NetStumbler.

A paper on detecting wireless LAN MAC spoofing. A bit dated, but still a nice little bit of knowledge to have when looking into wireless forensics and traffic.

A fictional Red Team Assessment paper. This paper is a practical for a GIAC certification. Interestingly enough, it is actually a response/engagement against a previous GIAC practical paper submitted by another certification candidate.

A short paper from Joatblog on fingerprinting, which also contains a nice list of resource links at the bottom.

And this is why you block ICMP (or at least monitor it closely): ICMP tunneling. This is the kind of project I’ve been wanting to do for some time now, along with an SSH tunnel that I can set up from anywhere and use at, say, a wireless hotspot while still maintaining a good measure of privacy. (A rough monitoring sketch follows after this list.)

A paper on how to install a secure Linux web/mail/DNS server. Requires a PDF viewer.

Part 1 of a series of papers on Linux Security. Tons of links to other resources at the bottom.

NSA’s 60-Minute Network Security Guide. A nice little overview-type read that covers as much ground as some network security books. A good bit of inspiration and a start toward getting into the right mindset.

An article on understanding TCP reset attacks. I have yet to read this one.

University of Washington’s course on modern cryptography has been placed online. Might be some good material to read on a rainy day.
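
Since the ICMP tunneling item above is the one I most want to play with, here’s the rough monitoring sketch I mentioned: flag ICMP packets with unusually fat payloads, one crude signal that someone is stuffing data into echo traffic. It assumes Scapy and root privileges, and the 64-byte threshold is an arbitrary number I picked, not any kind of standard:

    from scapy.all import sniff, ICMP, IP, Raw

    THRESHOLD = 64  # bytes; arbitrary cutoff -- normal pings carry small payloads

    def check(pkt):
        # flag ICMP packets whose payload is suspiciously large
        if pkt.haslayer(IP) and pkt.haslayer(ICMP) and pkt.haslayer(Raw):
            size = len(pkt[Raw].load)
            if size > THRESHOLD:
                print("%s -> %s: ICMP payload of %d bytes" % (pkt[IP].src, pkt[IP].dst, size))

    sniff(filter="icmp", prn=check, store=0)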

2006: the year the blanket of ignorance started sliding off

One author has dubbed 2006 the year of the breach. I disagree. I think this was the year the blanket of ignorance started sliding off. We’ve not had more data disclosures or identity thefts; we’ve just heard about them more than in previous years. Laptops have always been lost, and they have always carried data that either shouldn’t have been on them at all or should at least have been encrypted. This is not new. But our talking about it in mainstream circles and media is new, especially in light of newly enacted regulations forcing such disclosures.

In addition, drivers, particularly wireless ones, were outed throughout the year, and all those quiet little problems with their code quality came to light in quite dramatic fashion. This is still a fairly quiet problem, however, probably because unless you’re installing a new system or you’re a gamer, no one regularly updates drivers. People still want to just ignore the problem.

Web 2.0 started getting beaten around a bit as application developers kept pounding out insecure code, and several researchers showed us that this all goes deeper than we thought: JavaScript and HTML are capable of very similar attacks and recon exploits. We all feel a bit less safe on the web as a whole. The Month of Kernel Bugs opened eyes to kernel issues, full disclosure, and software patching processes in both open and closed source projects.

While few of these issues are truly new, and nearly as many are still not really solved, at least we’re talking about them in public and they are getting attention. We can no longer live with self-inflicted ignorance in management who would rather not think about a lost laptop, and who are even less inclined to admit to anyone that one was lost when it does happen.

see ya open relay database!

The Open Relay Database service has finally called it quits. ORDB provided a blacklist of known and/or suspected spamming SMTP servers, based largely on IP addresses.

This was always a bad idea. I dislike lame workarounds for a problem inherent in the protocol itself: lack of authentication. Trying to tack on security just won’t work here. You might be able to shun a large swath of spam, but you also catch a lot of dolphins in the net. Take me, for instance. My home mail server sits on a DSL or cable line. ORDB labeled my connection as a home-based system or even a “dynamic IP,” and thus anyone using their blacklist dropped any email I sent. Most companies that used this blacklist also did not accept free mail services like Gmail and Hushmail, which truly made communicating with some companies extremely problematic. I never did get a response from ORDB about my reservations (to put it lightly). You can drop 100,000 spam messages and no one will care; drop 1 extremely important email from a VP and heads roll. This tradeoff affects most any spam protection, but shunning by IP is not the solution.

Likewise, I’ve heard tales of legitimate companies being placed on the blacklist and having a hell of a time trying to get off it. There is no real definitive threshold or line drawn where, when complaints cross it, a site gets blacklisted. This means that the larger the institution, the more likely a few clueless people will report legitimate mail they requested as spam and screw up the sending company. Not a cool model.

So, rather than just complain, what do I recommend? Honestly, I’m not sure. There should be signature-based detection, but that relies on someone keeping the signatures updated (an outside service). This should be accompanied by automatic denial of certain types of email, such as messages with .com attachments and so on. There should be some measure of Bayesian/subjective analysis, but it can’t be terribly draconian or legitimate emails will be dropped. When it comes to my home network, I’d rather delete a few rogue emails than lose a few mis-categorized ones. I also believe in layered defenses, so this network-based detection can be augmented by any client-side “junk” filters. Most email programs today include some sort of manually configurable junk filter that can “learn” as you use it. Use that for anything that gets through the initial measures.
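
As a toy sketch of the “automatically deny certain types of email” piece, here’s roughly what that check looks like with Python’s email module. The blocked-extension list is just my own illustrative pick:

    import email
    import os
    import sys

    BLOCKED = {".com", ".exe", ".scr", ".pif", ".bat", ".vbs"}  # illustrative list

    def should_reject(raw):
        # walk the message and reject it if any attachment has a blocked extension
        msg = email.message_from_string(raw)
        for part in msg.walk():
            name = part.get_filename()
            if name and os.path.splitext(name)[1].lower() in BLOCKED:
                return True
        return False

    if __name__ == "__main__":
        # exit 1 (reject) if the message on stdin carries a blocked attachment type
        sys.exit(1 if should_reject(sys.stdin.read()) else 0)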

The rules change a bit when you talk about corporate email systems, however. No one wants their users to get any spam at all, let alone something offensive or inappropriate. In a corporation, I really believe the company either needs to accept some measure of spam (typically smaller companies with smaller budgets, which may also be more likely to need email from servers like mine) or spend the money to fully outsource the job to a professional spam-blocking service. For comprehensive, intelligent, and highly accurate spam blocking, I don’t think any company can do it alone. We use Postini at work, and I have to say I’ve been quite happy with it. Basically, you get a service upstream, it filters the email, and you receive only the good stuff. This takes pressure off corporate IT to be spam experts 24/7, which is just not practical.

Ultimately, I’ll have more of an opinion on this after I play with SpamAssassin some more. I really do believe SMTP is a good protocol, but the Internet has grown larger and more depended-upon than SMTP was designed for. I consider it an already-dead technology that will linger for many, many more years simply because of its low cost and ease of use. It will eventually be replaced with voice services or SMS and messaging services. The only effective difference between email and IM is the ability for mail to be held on the server until the user logs in and retrieves it. Yahoo has done that in IM for years, and Google continues to make Gmail and Google Talk overlap more and more to achieve that switchover.

not only have criminals matured, but so have security pros!

Ten years ago, it was still common to refer to hacking groups by their creative and rather dark names like Cult of the Dead Cow, and to handles like Master of Disaster. These days, hacker criminals (note the qualifier “criminals” attached to the otherwise non-negative “hacker”) have matured their practices from being merely curious, annoying, and destructive to being profitable. But so have the security pros. While there are still people like Major Malfunction and Phenoelit around (and many, many others), just look at my list of links to the right, especially the blogs. I now see more real-life names than I ever would have 10 years ago.

That’s not to say hackers do not have witty handles anymore, but there is that maturing going on in all facets of this industry. Curious, if nothing else. Me? I like the ability to use a handle to protect my identity online just a little bit more. Games, forums, IRC, IM…everything still asks for a unique username, so may as well blend that in with my industry handle. Better than being Avengerr26078 or Neo643389x!

death by a thousand cuts…the details will kill us

UCLA just announced the disclosure of private data on 800,000 people. I find it disturbing that the “attacks occurred between October 2005 and November 2006.” I almost suspect that is only as far back as the backup tapes and/or logs go. And there were multiple attacks? I would be willing to put money down on the detection being accidental on the part of the network admins: maybe someone looking at something they normally don’t, or noticing something odd while troubleshooting an unrelated error, as opposed to an IDS barking alerts, alarms going off, or the attacker(s) being noisy.

Information insecurity isn’t going away, and it is very hard to truly protect juicy targets. IT is understaffed and underbudgeted. We complained about this 8 years ago and we still complain about it, because technology and information have grown while staffs haven’t kept pace.

We also have an inability to share information. We work in an industry that cannot disclose details without the very real fear of lawsuits. But we desperately need to share them. We need to share what broke down in UCLA’s detection strategies. We need to share how they learned of the incident and investigated it. We need to share the good and the bad, what works and what doesn’t, the internal political barriers and the champions who push through them. Otherwise this issue cannot go away, and we’ll only have analysts and journalists telling us (and our management) what “should” be done with absolutely no regard for the feasibility of those measures. (Of note, I love analysts/journalists telling companies how easy it is to encrypt full hard drives just because they were able to encrypt their own hard drive once, two weeks ago, and then didn’t like it and removed it…)

If we are to start making headway, we need the details. Otherwise the details, in their silence, will kill prevention.

how much longer will open source last?

Open source software is considered by many to be the untainted version of the freeware available on the web. Far too often, “freeware” bundles in other, smaller programs, from announced installs like the Google or Yahoo toolbars to unannounced installs like spyware and adware. Open source is a much, much more trusted “standard” for web surfers who want to download and install programs and still sleep easier at night.

But I wonder how long such trust will last. I download and install open source apps regularly; in fact, unless I know the application well, I don’t install a closed source app when an open source alternative exists. But do I look at the source code to make sure some spyware isn’t packaged inside? How many other people compile the source themselves, let alone truly understand the code enough to feel safe? And if someone with programming knowledge does this, will he be able to let the rest of us know and “out” the application?

Right now we (I) have blind trust in anything deemed open source, and maybe a little more trust in something open source available on SourceForge or through a package manager, but there will someday come a time when even open source is not safe from the little things installed by determined marketers. What if an application is only really “safe” when manually compiled from source, while the compiled binary version has small print in the EULA hinting at additional software…?

hoping ISPs are not going to tackle the botnets and zombies

There is more and more talk of people (typically people who just talk about things, i.e. analysts, as opposed to people who really *do* anything) wanting ISPs to take up the battle against botnets and zombies. Personally, I feel that if ISPs are forced into taking care of things closer to the end-user, or that affect the end-user (through detection and/or shunning after a threshold), they’re going to go balls-out and go farther than I, as a consumer, want them to go.

It is already difficult enough to shop around for an ISP that gives me a static IP (or at least a very low-turnover dynamic one), allows me unfettered incoming and outgoing ports, and lets me run my own mail and DNS servers as I see fit. I don’t want that crap done for me. And I don’t want to pay for business-class service. But if I were an ISP forced down this route, forced to tackle a layer of the communications I wasn’t really supposed to tackle (this is like asking the physical layer to protect the sessions), I would make damn sure I log everything I can and go as far and as thoroughly as I can before consumers start decrying privacy issues and freedom of service. This is a ball I do not want to get rolling.

Besides, I don’t really think ISPs are going to dent that particular problem right now. I’d rather they were left to focus on what they do best and provide me with uptime, reliability, and faster circuits. I don’t want my system shunned (loss of reliability) because one of my neighbors can’t stop visiting infested porn pages, or shunned out of the blue if it is my own system that is affected.

But yes, I do think security will still head towards the switch, only the switch will be inside corporations and inside the user home.

on security workarounds and knowledge

I am often amazed at some of the solutions to security problems that some organizations and people implement. A situation recently came up on a mailing list about a web-based system developed to “hide” the URL bar from users so they couldn’t see and/or manipulate the URL. This is almost certainly meant to obfuscate sensitive data in the URL and possibly avoid the risk of someone manipulating that data (the classic www.domain.com?price=199 parameter, which can be edited to change the actual price). Now IE7 is out, which forces the URL bar to be displayed. Kind of defeats some of that purpose, no?

Other times there can be some very creative ways to deal with security issues. SMTP “security” can be achieved by capturing emails with “SSN” in the body and saving them on the mail server for pickup by the recipient party. This really does not fix anything in SMTP or email, but rather just changes the path of the missive. Sadly, this is usually pretty annoying from the recipient’s point of view.

These are sometimes just patches and workarounds to the real, deep issues of security. In the first example, the app should have been rewritten to display a sanitized URL. In the second, figure out a better way to utilize email or try to re-invent SMTP (hard sell, that).
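
For the first example, here’s a hedged sketch of what the deeper fix looks like in my view: whether or not the URL is hidden or sanitized, the server should look the price up itself and ignore whatever the client sent. The item names and prices are made up for illustration:

    CATALOG = {"widget-a": 199, "widget-b": 499}  # stand-in for a real product database

    def checkout(item_id, client_price):
        # client_price is deliberately ignored; it exists only to show what NOT to trust
        try:
            return CATALOG[item_id]
        except KeyError:
            raise ValueError("unknown item: %r" % item_id)

    # Even if someone edits ?price=199 down to ?price=1, the charge does not change:
    print(checkout("widget-a", client_price=1))  # -> 199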

I’ve found that there is an endless supply of creative and workaround ideas in the field of security, and I think a large part of that is a function of the skill level in the field. As more and more auditors (people who check lists…), non-geeks, and barely competent IT support persons move into this field, the talent and skill get a little more watered down. Instead of understanding the nuances and/or realities of a tool, shallow knowledge too often gives rise to ill-conceived workarounds and obfuscations of issues.

It truly does take a technical and deeper knowledge to effectively and quickly determine security responses and measures (or how to beat them). Someone cannot take a position to secure DNS without understanding how DNS works. Likewise, how do you secure applications that depend on DNS when you don’t even know DNS itself?

Web applications are teeming with this issue. A developer who knows how to program security into the product on the fly, and who codes with security in mind, has a huge advantage over the developer who only knows how to make the functionality work (sometimes in equally ill-conceived ways) and then has to spend tons of time trying to bolt on security down the road. Knowledge would save time and money.

I think this is where a lot of bad security comes from, just a simple lack of expert level knowledge. This itself is tough to achieve anyway, as a security guru tends to be seen as a cost, not a value-add. They add value by also doing network/systems administration, which tends to trump security when push comes to shove.

And while budgets, poor management, poor decisions, and other things influence one’s ability to get educated and/or implement solid security endeavors, I still think being an expert in the basics goes a long way. Why implement an expensive NAC solution when you can drop in an old box running arpalert (free) and check for rogue machines that way? Why spend hundreds of man-hours limiting an application’s exposure on the network when you can ensure your code withstands fuzzing attacks?
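
On the fuzzing point, a toy sketch of the idea: throw random bytes at a parser and log anything that blows up. parse_record here is a placeholder for whatever input-handling code you actually care about:

    import random

    def parse_record(data):
        # stand-in target; swap in the input-handling function you actually want to harden
        return data.split(b"|")

    random.seed(0)
    for i in range(10000):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
        try:
            parse_record(blob)
        except Exception as exc:
            print("case %d raised %r on input %r" % (i, exc, blob))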

This isn’t the only reason we have insecurity, obviously. There are time issues and often pressures from outside the competent developer’s control. And there is much to be said about defense in depth by doing everything one can to make a more secure product, but I still believe the basics are what comes first. The obfuscation needs to come after. The creative workarounds that could be obsolete next year need to be second.

The future is still going to remain with open source tools and creative ways of being an expert with the basics. Not on spendy and fancy workarounds that too often miss the real points of insecurity or create insecurity itself. Besides, even something as epidemic as XSS is not a difficult issue to either exploit (usually) or prevent. This is basic stuff that we’re still struggling with.
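
And since I brought up XSS, the “basic stuff” really is about this small. A minimal sketch of escaping user input before echoing it back into a page (html.escape is the Python 3 name; older versions had cgi.escape):

    import html

    user_input = '<script>alert("xss")</script>'
    safe = html.escape(user_input, quote=True)
    print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;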

(On the flip side, I find it equally bad to be both complex and the only expert in that complexity, since it means only you have the knowledge to make things work…complexity begets complexity begets less security…)

decrypting wireless packets

I made a few discoveries this weekend. First, a wireless access point popped up in my neighborhood recently that is not encrypted, as a quick NetStumbler test showed me. Second, my newest used laptop appears to be equipped with an Atheros card. Oh joy! I might just have to dual-boot that guy into Linux!

I hopped on the wireless network to poke around, but the Netgear admin password had been changed, and the one other system on the network was sending very few packets across. In fact, nearly all the packets I picked up were not being decoded properly by Wireshark; they kept coming up with a Belkin MAC and something about malformed packets. I’m wondering if this is a Netgear/Belkin combination using some proprietary “speed-boosting” that is mucking up the packets. I fired up the newest Cain as well, just in case something interesting flew by.

I’m not really sure, since I’ve not seen this before, but I’ve left the laptop on the network and will check it out over the next week or two. I do have an Internet connection through it. Windows Network Neighborhood gave me the computer name, which happens to be a girl’s name, and the AP’s SSID is a last name. Tonight I need to check what IP I have so I can identify the service provider and do some external testing, although I suspect I won’t find anything useful. Between some Google searches and whatever traffic I can capture and decode, that is quite a bit of information to leak already.

At any rate, it is fun to have a spare system I can dedicate to wireless stuff. I’ve been wondering what to do with it, as it is a little too big to carry around (about 10 lbs, and it only just fits in my backpack) for real portability, especially since I have far lighter systems. But now I think I have at least one use for it as a wireless workhorse.

are we there yet

I’ve seen a few “wide scope” posts lately about the state of security, but this one has some of the best points in it, and presents them very well. Mostly I just want to save this for my own use in the future.
Just one comment on it. Items 14 and 15 talk about how we cannot seem to agree, as a field, on best practices. Those points are illustrated by item 2, on disclosure practices. Many of us understand both sides of the equation and even the grey area in between, yet we still fall on all sides of the debate. Sometimes there is really no universally correct answer…especially in a field as complex as IT and security.

sysadmin of the year

The first Sysadmin of the Year awards have just been announced. While half the stuff said is likely embellished and it is just a little pat-on-the-back kind of site, I thought it was interesting to see what these guys did that made their peers and co-workers nominate them.
I also would like to note that not one of them (with the Air Force exception) is wearing a tie to make him work better. Nor do any of them work for recognizable big-name corporations. These guys just plain “get things done” as opposed to running the gauntlet of business politics. And I would be willing to bet that every single one of these guys truly loves his job and company. Happiness == productive == successful.