new book releases

A couple of new books have been spotted in the wild! I ordered Security Power Tools last week from Bookpool, and it should be arriving today or tomorrow! Yeah, I already have books dealing with open source security tools like Nessus and the like, but I like the hands-on, practical look this book appears to take. I thumbed through it at the store this weekend and was surprised at the detail and also at how thick the book is! I’m not sure I’ve seen an O’Reilly book this thick before. I love books like these, since I do believe someday the back will be broken in IT and security, where spending more money on ever-spendier tools and appliances will not be an option, but we’ll still need to Get Shit Done. Open source and other freer tools are still going to be the future reality for most of us. Books like Special Ops, Hack Attacks, and Hack I.T. are mere blitzes compared to more in-depth information on using the tools.

I also want to find and pick up Metasploit Toolkit for Penetration Testing, Exploit Development, and Vulnerability Research, a book I’ve been anticipating for some time. It will kickstart my dive into Metasploit 3, since I’ve put off that transition for too long now. I checked around yesterday, but the local stores didn’t have copies available. Since Bookpool is surprisingly not discounting this one, I may as well pick it up at the store!

security even a caveman can break

I saw via Bejtlich that InformationWeek has an excellent article up about Robert Moore, the hacker who, a few years ago, broke into quite a few telecom (and likely other) organizations to steal and route VOIP service.

The article continues to pound home that we’re doing the simple things very badly. And we have no friggin’ clue when someone malicious is doing things inside our network. Here’s some meat, though:

“It’s a huge problem, but it’s a problem the IT industry has known about for at least two decades and we haven’t made much progress in fixing it,” said van Wyk. “People focus on functionality when they’re setting up a system. Does the thing work? Yes. Fine, move on. They don’t spend the time doing the housework and cleaning things up.”

That’s really a huge part of the problem, isn’t it? Implement VOIP, and hope that you get time to get back to it later to evaluate the security before your next big projects come up. And so on.

Really, I feel that this problem is twofold. First, we’re still maturing in our grasp of technology. Unfortunately, and *naturally,* the attackers are maturing faster. This happens in biology as well, so we need to accept and expect it as a given. Second, we rarely have the time and resources either to do the job correctly up front or to revisit it later and fix it up.

thoughts on cyberinsurance

Bejtlich slammed out a bunch of posts late last week which I’m still wading through. Excellent food for thought for a whole week or more! I just wanted to jot a few thoughts of my own down, fairly unformulated ideas…

Cyberinsurance. It really does make sense on paper, no? And it’s one of those things we look towards like the sun peeking from the clouds in the distance as we’re still getting poured down upon; there is an end!

Sadly, it’s not a perfect solution. IT is spendy. Unlike fire insurance measures, which may just mean inheriting whatever the builders built, marking exits with placards, and posting occasional fire extinguishers, we inherit insecure building blocks and have to do a hell of a lot more to detect and monitor while also providing IT services to the business. That’s a very different magnitude.

Fires happen, but not very often. A major cyber attack may not happen to your business very often either, if at all, but smaller-scale incidents certainly do: viruses, worms, snarfed credentials, file loss through P2P. None of these is like a fire that destroys a building; IT security is more like lots of little fires that can pop up every week in various corners.

Likewise, what if it were profitable for people to set fires to your building? And they could set fires without being physically present? And have little chance of being caught unless the fire gets way too big? I think we’d see lots of fires and fire insurance would have some pretty deep questions to start asking itself.

When a fire occurs, there are professionals trained to examine the scene and determine the cause. Those causes, with extremely exotic exceptions, are fairly finite and predictable based on the operations that take place in that building. Negligence can be supported with building specifications, local and federal laws and standards, and inspections based on specifics. IT is far wider in its spectrum of choices, tools, implementations, and so on. There are best practices for things like a Windows shop, but relatively few people know them fully or pursue certs that would help solidify them.

Maybe cyberinsurance will be a way to show compliance? For instance, you do measures X, Y, Z, and part of G, and you won’t have to pay all that much more in premiums. Of course, how much do those measures cost compared to the savings? Taking this a step further, how is this very different from the much-maligned “HackerSafe” logo on websites? As an industry (and the media, and thus average people, and thus culture), we’re very intolerant of single failures. This might be because single failures can affect millions of people in ways we probably don’t even know about yet. Or it might be because it’s all so very dramatic… Laptop theft has existed since there have been laptops, but it seems more prevalent now because of disclosure requirements…

Insurance also seems to be something people buy to protect against things outside their control. Attacks and other digital shenanigans are perhaps seen not so much as random or natural acts, but rather as things we can control. Why buy cyberinsurance when that money can be spent on the IT/security infrastructure? We still have a lot of ways to become more secure, whereas insurance seems to me to be something you buy when you’re out of alternatives and need a safety net.

Cyberinsurance sure sounds good, but I wonder if our current state is going to upheave such an insurance model in the same fashion that technology is upheaving our ideas of copyright and privacy.

Anyway, just some thoughts for me for the future, nothing solid or much that I’d back in a challenging discussion, yet. 🙂

installing pidgin 2.2.0 on ubuntu 7.04 to use google talk

I recently decided I needed to use Google Talk. I don’t know why, but I have Gmail accounts, so why not buddy up to Google Talk? I use Pidgin 2.0.0 on my Ubuntu 7.04 laptop. Unfortunately, I was having no luck getting XMPP (Google Talk) to connect properly. An upgrade to 2.2.0 is in order, right? Unfortunately, nothing exists in the repositories to upgrade Pidgin. Great! With the following steps, I did not have to remove my old Pidgin installation, and all my settings and buddies were carried over just fine.
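Before starting, a quick sanity check on what is currently installed never hurts. A minimal sketch (dpkg’s output formatting varies slightly between releases):

# confirm the installed Pidgin version before touching the repos
dpkg -l pidgin | tail -1
pidgin --version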

First, I need to update my repositories list:

sudo gedit /etc/apt/sources.list

with:

deb http://repository.debuntu.org/ feisty multiverse
deb-src http://repository.debuntu.org/ feisty multiverse

Then run the following commands:

wget http://repository.debuntu.org/GPG-Key-chantra.txt -O- | sudo apt-key add -
sudo apt-get update
sudo apt-get install pidgin
sudo apt-get install pidgin-libnotify

After this, Pidgin can be started from Applications -> Internet -> Pidgin. Once the app has started, I want to connect to Google Talk. Accounts -> Add/Edit -> Add -> Google Talk.

My protocol is XMPP by default. Screen name is my Gmail login. Domain is gmail.com. Resource is left at the Home default. In the Advanced tab, I checked Require SSL/TLS, set the connect port to 5222, and set the connect server to talk.google.com. I left the Proxy type at Use GNOME Proxy Settings.
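If the account still refuses to connect, it helps to rule out network and port problems by poking at talk.google.com:5222 directly. A minimal sketch (nc is netcat; the -starttls xmpp mode only exists in newer OpenSSL builds, so treat that second check as optional):

# is the XMPP port even reachable from here?
nc -vz talk.google.com 5222

# if your OpenSSL build supports xmpp STARTTLS, confirm TLS actually negotiates
openssl s_client -connect talk.google.com:5222 -starttls xmpp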

References
installing pidgin 2.2.0
connecting to google talk

secutor prime examines desktop compliance checklisting

I don’t do much desktop work right now, but it is still nice to see how a system compares to various standards. I’m not sure where I picked this up yesterday, but I got pointed to a tool, Secutor Prime, which examines a system and compares it to various standards such as the FDCC. The best part of this tool is the feedback: clicking on any check will give the findings and also the steps needed to pass that particular test. An excellent means to learn more about desktop security, the settings involved, and what compliance checklists look for.

the security silver bullet syndrome in negative exposure

It’s not often someone hits a pet peeve of mine dealing with security, but I bristled at one just now.

One of my tenets of security is that there is no silver bullet or security panacea. I think we universally accept that.

But there are insinuations and beliefs that, in a way, are saying there really is a silver bullet. Most of these have to do with saying “Security measure X is not 100% effective, therefore it is useless/inefficient/expendable.”

I’ve seen this with Jericho Forum defenders who say the perimeter is porous now, which must mean the firewall is less efficient, which must mean we’re moving towards no perimeters. “What use is a perimeter defence with holes in it after all?”

Such a statement is analogous to saying, “I expect my security measures to be silver bullets.”

I don’t think I’ve stumbled downhill nearly that violently since breaking my leg sledding one winter…

some logging notes

Cutaway has an excellent interview up with Michael Farnum, who talks about his experiences with companies on a number of things, namely logging. Does he see companies logging, are they doing it properly, and so on. Excellent insight into what’s really going on, and not as untrustworthy as a sheet of stats from some vendor with an agenda.

Reflecting on the questions and answers, here are some of my bullet points when it comes to centralized logging discussions.

1. The IT team needs to see value in the process of logging and reading logs. If they don’t see value, they either won’t do it, won’t do it properly, or will have no clue how to leverage it. If they don’t see value and the business sees no value, it just plain won’t get done. That value almost always ends up being an operations value-add rather than a security one. Something went wrong with a web app: can you troubleshoot it by looking at the logs? Or a server isn’t updating properly from WSUS… and so on. Logging should be seen as being as important as a heart monitor on a patient in the hospital.

2. Once there is value, or maybe even before the value is realized, admins need the time to properly get things set up. Having enough time to gather Windows event logs and nothing else is going to be a wash. Same with just gathering the logs on half your firewalls. Give the team enough time to properly get things going.

3. Set aside time for the admins to regularly look at logs and maybe even “play” with the logging server. If admins don’t have time or are not allowed to use the logging reporting and querying regularly, they won’t have the familiarity to do it when emergencies or high-profile incidents arise. Practice, practice, practice (a minimal sketch of what this can look like follows this list).

4. For the love of whatever, read Anton’s paper(s) about the six mistakes of logging.
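As a concrete illustration of points 2 and 3, here is a minimal sketch of classic syslog centralization plus one practice query. The hostname, paths, and init commands are assumptions that vary by distro:

# on each client, append to /etc/syslog.conf (selector and action are tab-separated):
*.*	@loghost.example.com

# on the central server (sysklogd on Debian/Ubuntu), allow remote messages by
# setting SYSLOGD="-r" in /etc/default/syslogd, then restart:
sudo /etc/init.d/sysklogd restart

# practice query: who is failing SSH logins, and how often?
# (the awk field shifts for "invalid user" lines, so treat this as a rough cut)
grep "Failed password" /var/log/auth.log | awk '{print $11}' | sort | uniq -c | sort -rn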

My own logging? At home, I don’t do enough. At my last job, we did logging, but we didn’t use it enough or probably didn’t use it properly. At my current job, we don’t do enough logging at all.

how do you eat your 0day?

There is an interesting discussion this week on the Full Disclosure mailing list about the definition of “0day.” Oddly, what seems like an old term is definitely not one with an understood, universal definition; usage varies widely, dramatically widely. Then again, FD is a fairly argumentative list, with some people arguing anything just to argue. Still, the lack of clarity in some of our most widespread terms is interesting.

My take on 0day, which I’ve used ever since I first heard the term many years ago, is pretty much the same as the Wikipedia entry. To me, a 0day is an exploit released before solutions or patches have been disseminated by the vendor. By that definition, a new strain of a virus exploiting a known vulnerability would not be a 0day, but a new worm exploiting a new vulnerability would qualify. A side question is whether something is still a 0day to someone who has already seen it and provided a workaround, even though they’re not the vendor. To me, 0days are somewhat unstoppable exploits, mitigated by defense in depth and layered defenses.

And don’t even bring up “less than 0day,” as I feel dumber each time I hear that term…

unisys and dhs security debacle

The other day I posted about Unisys and the DHS. After seeing a post from Bejtlich, I see they’re fully wading into it together. Ugh.

While I won’t defend Unisys, I’ll play Devil’s Advocate for just a moment. Was Unisys just providing the systems and processes, with DHS meant to actually put things into operation? And were there any obstacles imposed by DHS that prevented things like IDS systems from being implemented? I know it can be a pain when you’re asked to install ABC onto 45 systems, but half of them keep telling you they’re too busy and to try again next week.

It obviously sounds like Unisys made some really poor decisions, but I’m curious about the extent of them, both from Unisys and from DHS itself, if any. Thankfully, this is the transparent government and not private companies, so we get to watch the laundry shake violently in the wind.

when terminal/server is reinvented as desktop virtualization

Ever read an article that makes you kinda stop anything else you’re doing as you try to make sense of it? Then read it again, which doesn’t help…then read it in bits and pieces to see if you can make sense of the parts in order to tackle the whole? And then maybe still wonder what sort of crack the author is on? I had that this morning reading an eWeek article, Analysts Predict Death of Traditional Network Security. I guess there’s a reason I didn’t re-up to eWeek a few years ago. And it is just coincidence that the topic is de-perimeterization and mentions the Jericho Forum, I swear!

According to them, in the next five years the Internet will be the primary connectivity method for businesses, replacing their private network infrastructure as the number of mobile workers, contractors and other third-party users continues to grow.

…So the Internet is not already a primary connectivity method? I guess I underestimate the Frame Relay and dedicated links market dramatically!

One of the end results of the death of traditional network security will be a growth in desktop virtualization, Whiteley said.

Hey, that’s kinda cool to read. In fact, we’re doing some desktop virtualization right now for mobile employees, particularly offsite developers. They VPN into our network, then Remote Desktop into a virtual machine on our network, on which they do their work. Odd… I never once thought of this approach as being part of de-perimeterization or the death of the nebulous “traditional network security.” It’s a way to avoid bandwidth restrictions and data egress.
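For the curious, there is nothing exotic about the flow. Once the VPN is up, it boils down to a single Remote Desktop connection; from a Linux box it might look like the sketch below (the user, domain, and hostname are made up):

# connect to the offsite developer's work VM over the VPN tunnel
rdesktop -u jdoe -d CORPDOMAIN -g 1280x1024 vm-dev01.internal.example.com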

Desktop virtualization allows a PC’s operating system and applications to execute in a secure area separate from the underlying hardware and software platform. Its security advantages have become a major selling point, as all a virtualized terminal can do is display information; if it is lost or stolen, no corporate data would likely be compromised since it wouldn’t be stored on the local hard drive.

And this is where we finally stop toeing the brakes and actually put some pressure down on the pedal. I don’t think the author has been involved with something called terminal/server architecture before, since that’s exactly what he described. He did not describe desktop virtualization. Maybe we’re seeing the bastardization of terms… which is unfortunate. There is a point to be made about moving to virtual desktop systems and also moving back to terminal/server setups, but it really has nothing to do with de-perimeterization or the use of the Internet to connect businesses. It has to do with support costs, desktop OS compliance activity, and data security. All of which are vague and ubiquitous enough to “support” pretty much any security theory or initiative. Part of my religion is predicated on you breathing regularly. If you breathe regularly or believe in breathing, then you support my religion. Um, no.

The adoption of PC virtualization would mean companies would no longer have to provision corporate machines to untrusted users, Lambert said. Desktop virtualization simply equals a more secure environment, she said.

Hrm, I don’t follow that reasoning at all. In fact, this is a three-punch combo of confusion. People provision computers to untrusted users? Desktop virtualization means you don’t have to provision anything now? And somehow that makes things more secure? I’m feeling nauseous…

I think the author and the people quoted in the article (Forrester analysts) need to take a step back and iron out what they mean by desktop virtualization and how that compares to the age-old terminal/server environment, and move forward from there. But some of these conclusions just don’t follow, and the muddiness of the terms and logic makes the article a waste of time.

switch basics: loading up a wiped cat 2950

Holy crap, 9600 baud is slow! I’m doing something different in loading a wiped switch, and I thought I would use an xmodem transfer. Go me! Since this is taking so long, I may as well post some switch basics as I go. (For perspective, my earliest Internet speeds were 14.4 kbps modems back in high school.) I’ll also go ahead and put on some background music, the excellent Dubnobasswithmyheadman album from Underworld (a favorite!).

I have a completely wiped Cisco Catalyst 2950T switch. Even the flash has been erased (an eraser of love). If you boot it up, it gives an error and stops pretty quickly, and a quick “dir flash:” shows nothing. I also have an IOS image ready and waiting: c2950-i6k2l2q4-mz.121-22.EA8a.bin. For my console system I have an old Dell Latitude laptop (yeah, it’s one sexy-small laptop!) running a permanent install of BackTrack2.

To get the c2950-i6k2l2q4-mz.121-22.EA8a.bin file to BackTrack2, I decided to also test my tftp server and use tftp to transfer the file. My tftp server is at 192.168.10.108.

tftp 192.168.10.108 -c get c2950-i6k2l2q4-mz.121-22.EA8a.bin
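For completeness, the server side of that transfer is a one-liner if you have atftpd handy. This is a sketch under assumptions: the daemon, port, and /tftpboot directory may differ on your tftp server:

# on 192.168.10.108: serve files out of /tftpboot on the standard tftp port
sudo atftpd --daemon --port 69 /tftpboot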

Gosh, that’s easy. Now I need to connect to the switch by plugging in the necessary cables, including power so that it boots up. I decide to use CuteCom in BackTrack2 as my graphical terminal emulator. I change the baud rate to 9600 and click Open device, then type a few commands to get ready for my transfer.

switch: flash_init
Initializing Flash…
…The flash is already initialized.
switch: load_helper
switch: copy xmodem: flash:c2950-i6k2l2q4-mz.121-22.EA8a.bin
Begin the Xmodem-1k transfer now…

At this point the terminal is waiting for data. CuteCom has a Send File button at the bottom where I can select the file and start transferring at a blistering 9600 baud! In fact, after writing all this, I’m still only up to 15% complete. Ahh, the joys of a wiped device that doesn’t even know what an IP address is yet.
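Once the transfer eventually finishes, booting the freshly copied image from the same prompt should bring the switch back to life. This is a sketch from my memory of the 2950 recovery procedure (including the boot-loader baud trick I wish I’d remembered before starting), so verify against Cisco’s docs:

# (next time) crank the console speed first, then reconnect the terminal at 115200
set BAUD 115200

# boot the image we just copied into flash
boot flash:c2950-i6k2l2q4-mz.121-22.EA8a.bin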

i blame you for whatever went wrong for me today

Articles like this one about DHS looking to investigate a government security contractor illustrate some of the crap (normal business activity) that occurs in our industry. I’m not going to presume I know the full story, what was in the original contract, or what Unisys’ side of it is, but I think the article highlights two painful realities.

1. If DHS is attacked and they have someone to blame, such as a contractor who should be taking care of things, the blame can and likely will be shifted, rightfully or not. This basically means the “information age” is not just surging along and pulling culture with it; business culture now requires that information be saved and documented to avoid he-said-she-said crap. So unless Unisys goes the proverbial extra mile in the contract and also documents all deviations and obstacles, and because security will always eventually fail, there will always be a scapegoat. Shifting responsibility onto everyone else is a hallmark of the 90s and 00s. (All starting with the McDonald’s woman who spilled hot [no shit?!] coffee on herself and successfully sued.)

2. The government is opening up competing bids for the contract. That means a major differentiator becomes cost, and we can all guess how the quality of security tends to follow the line of price. The lowest bid will almost certainly ensure the security is also of lower quality.