re: management by fact

I had a post a few days ago about managing by fact, to which Alex responded rather appropriately by saying “fact” is a bit of a strong and strict word. We can manage by belief, but our beliefs need to be backed by observable evidence, reason, and facts (yes, I’m rewording). He’s right and I have a belief that we both agree on this topic quite nicely. 🙂

feisty ubuntu server tutorial

Adnan posted about a Rootprompt post pointing to this Ubuntu server installation tutorial on Feisty Fawn. The tutorial is aimed at installing services that an ISP would need: SSH, BIND, MySQL, SMTP-AUTH/TLS, Courier-IMAP/POP3, Apache/PHP5, ProFTPD, ISPConfig. Not all stuff I necessarily need, but for some of it I do like reading up on how other people do these things.

I like this tutorial and I don’t like this tutorial. For starters, it is one of those things that says, “To install XYZ, run this command and move on.” It offers little help in deeply understanding what you’re doing and what nuances your particular needs or security posture might dictate. When you installed the SSH server, did you disallow remote root login? When you’re done with this tutorial, do you set su/sudo behavior back to the default? Do MySQL and Apache run under their own accounts, and can those accounts be logged into via SSH? The tutorial is great as an example of how easy it can be to install these services, but does nothing to warn users about the level of care and attention that might be needed to make sure everything runs securely and efficiently. Did you follow this tut and leave your balls out on the Internet to be tickled and kicked, or did you slip a cup on when no one was looking?
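For the curious, the sort of spot-checks I have in mind look like this (a sketch only; the sshd_config sample below is fabricated for illustration, and on a real box you’d grep /etc/ssh/sshd_config itself):

```shell
# Post-install spot-checks of the kind the tutorial skips (illustrative).
# A fabricated sshd_config stands in here; on a real box, check
# /etc/ssh/sshd_config directly.
cat > /tmp/sshd_config.sample <<'EOF'
Port 22
PermitRootLogin no
PasswordAuthentication yes
EOF

# 1. Is remote root login disallowed?
grep -i '^PermitRootLogin' /tmp/sshd_config.sample

# 2. Which accounts have a real login shell? Service accounts such as
#    mysql or www-data should end in /bin/false or /usr/sbin/nologin.
awk -F: '$7 !~ /(false|nologin)$/ {print $1 ": " $7}' /etc/passwd
```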

However, I do like tuts like this where sometimes the service you want to install seems daunting for no real reason other than fear of the unknown. I’ve worked with BIND in the past and can edit my own zone files, but for some reason I have never actually stood up a BIND DNS server myself. Tuts like this can blitz you through the unknown and get you going. You can’t learn to whitewater raft by watching from the bluffs. Get the hell in the water, capsize yourself, and get wet!

high-end insecurity: RFID and LCD

Looks like you can recreate images on LCD screens remotely. I’m not sure how it works with moving images, but this is pretty high-end if you ask me. It is interesting to hear that NATO spent a lot of money to protect against a similar attack against CRTs. And also RFIDs are still being talked about for their flaws and the paranoia behind them.

One of my big things is how our security, laws, and entire culture have changed due to how efficient the digital world has become. Music has always been pirated; only now it can be done on massive scales. In the past, things like RFID and LCD eavesdropping were really only issues for extremely high-end governments and corporations. No one else cared, faced threats with these capabilities, had assets valuable enough to justify the cost of protecting them, or had the money to afford it anyway. We’re talking huge companies, governments, and militaries, and even just subsets of those.

But these days, things like this can become a reality for more people. RFID might be something we have in all our pets soon, cars, electronics, maybe even ourselves. LCD eavesdropping is still a bit exotic, but if it really is as easy as it seems, this could become a backroom concern for corporate espionage or even internal investigations. Can you imagine being assigned the task of sitting in a conference room and recording images on the screen of a VP two offices away as part of an internal investigation in addition to network and disk forensics? Could you maybe drop a magnetized object on the back of the monitor which automatically logs all the images much like a keylogger? What about the potential range of such eavesdropping? Can it be thwarted fully by focusing on the physical security angle or will LCDs be obsolete in 7 years just like CRTs are now, thus the vulnerability will slowly ebb away?

Some interesting thoughts…

continuing my education finally

I have finally begun the road of post-college continuing education (way behind schedule!). Today I passed what I consider my warm-up certification: Security+. Go me!

I was surprised by some of the questions on the exam, for instance: what protocol does the ESP portion of IPSec run over? I had no idea (heck, I don’t think I really knew what they meant by that!). Interestingly, Wikipedia knows! If I have any advice on this test, it’s to look up the objectives not just in books but also on Wikipedia.
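For the record, the answer turned out to surprise me: ESP isn’t carried over TCP or UDP at all, it is its own IP protocol, number 50. A Linux box will even confirm this from its standard /etc/protocols file:

```shell
# ESP has its own IP protocol number (50) rather than riding on TCP/UDP.
# /etc/protocols is standard on Linux (the netbase package on Ubuntu).
grep -iw esp /etc/protocols
```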

Some other questions seemed rather tough even for someone who has been in IT a while. “What is the first thing to do in XYZ?” You can easily overthink some of the questions and/or argue the subjectivity of some of the answers. There was another rather technical question that I wish I had the answer to (or even knew how to look up!). If an unauthorized user got hold of a Linux /etc/passwd file, what would likely be the cause? SSH 0.9.4 (I might have that # wrong) installed and configured; Sendmail set up with access to the administrator’s web mail; SSL something using the Apache account without virtual hosts defined; an FTP server with anonymous access configured. I was like, “huh?” I could maybe pop SSH if that version is vulnerable to something, maybe that Sendmail answer is referring to being able to remote in as root, maybe that Apache account has root-level permissions, or maybe that FTP server somehow allows access to the otherwise normally protected /etc/passwd location? I think I answered the SSH one…no clue if that was correct.

I’m pretty sure the exam is taken from a pool of questions so I don’t see them all, but I was surprised by the number of MAC (Mandatory Access Control) questions I had (at least 5!), some of which were almost word-for-word like others. Anyway, I don’t want to go over too many questions from the exam, but suffice to say it is a nice mix of technical and conceptual questions dealing with security.

Coming up:

stop ruining it for the rest of us!

If stories like this keep appearing, IT is going to continue to become much more complicated…

Denison first attempted a remote attack against the ISO data centre on Sunday, but this was unsuccessful. He then reverted to simpler means, and entered the facility physically using his security card key late on Sunday night. Once inside, he smashed the glass plate covering an emergency power cut-off, shutting down much of the data centre through the early hours of Monday morning. This denied ISO access to the energy trading market, but didn’t affect the transmission grid directly. Nor did his emailed bomb threat, delivered later on Monday, though it did lead to the ISO offices being evacuated and control passed to a different facility.

what I learned a few weeks ago: http request smuggling

Recently I saw an HTTP Request Smuggling alert fly past my IPS. It turned out to be a false positive, but led me down the path of figuring out what that attack actually was. This was one of the bigger things I learned that week. Coincidentally, almost that same day, I browsed backlog quiz questions from Palisade and came across one about HTTP Request Smuggling. Whoa!

HTTP Request Smuggling is scary for a few reasons.

First, and likely the biggest reason many people don’t hear about it: it is pretty complicated and technical. Do you know the differences in how your application-level packet interpreters (cache proxies, firewall proxies…) and your web servers parse HTTP? Me neither. But some people do, and I bet they can pilfer some scary stuff without many people knowing.

Second, you can poison proxy caches, pilfer credentials, and leverage other vulnerabilities like XSS using HTTP Request Smuggling without ever really needing to touch the client or have them do anything. The client really has zero ability to stop this attack (returned javascript notwithstanding).

Third, it sounds difficult to detect in logs and on the wire, since the packet parsing needs to be done with awareness of which web server and proxy server sit in the communication line, and how each of them parses HTTP.
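To make the parsing-disagreement idea concrete, here is a toy request of the double-Content-Length flavor (my own illustration, not taken from the white paper, and the header values are made up):

```shell
# A toy smuggling payload with two conflicting Content-Length headers.
# If a front-end proxy honors one and the back-end web server honors the
# other, they disagree on where the first request ends -- and the
# trailing GET gets "smuggled" past one of them.
cat > /tmp/smuggle.txt <<'EOF'
POST /page.asp HTTP/1.1
Host: www.example.com
Content-Length: 0
Content-Length: 44

GET /poison.html HTTP/1.1
Host: www.example.com
EOF
cat /tmp/smuggle.txt
```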

Palisade has a nice write-up on the issue available on both their quiz question and also their article. WatchFire has an amazing white paper on the issue that you can sign up to get (use Pookmail as your throwaway email address).

more linux basics – the sleep timer

I dig somafm, particularly the Groove Salad station. Sometimes I get into a nice chilled state of mind at night and would love to fall asleep to some cool grooves, but don’t want XMMS (my mp3 player) to run all night long. Well, I can do this easily in a terminal shell by first finding the pid of XMMS and then using the sleep command. Elegance in simplicity.

michael@orion:~$ ps ax | grep xmms
29540 ?        SLl    0:20 /usr/bin/xmms /tmp/groovesalad.pls
30511 pts/0    R+     0:00 grep xmms
michael@orion:~$ sleep 1200; kill 29540
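The same timer can skip the manual PID hunt entirely (a sketch; pgrep/pkill come with procps on Ubuntu, and stop_after is just a helper name I made up):

```shell
# Kill a process by exact name after a delay, instead of copying the PID
# out of ps output by hand. -x makes pkill match the name exactly, so
# "xmms" won't also catch something like "xmms-helper".
stop_after() { sleep "$1" && pkill -x "$2"; }

# twenty minutes of Groove Salad, then silence:
# stop_after 1200 xmms
```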

the backlog it taunts me

Man, it is amazing the backlog of things to play with and check out that an IT geek can accumulate. Not having had too much time lately, I’ve built up a 6-month backlog of about 200 little notes to myself: check this site out or that blog, check out this tool or that tutorial. Crazy! If I happen to start posting a bunch of stuff here, don’t yell at me. I used to use my blog as my notes place for new tools and things, and sometimes I’d post about something for my own benefit but never really get around to playing with it. I hate it, but that’s the way of keeping up with technology!

Scope! I need scope! Perhaps a job change that reduces my scope of responsibility might be helpful? I could just get a job where I create Exchange email accounts all day. 🙂 Yikes!

remoting into headless ubuntu box

Yeah, I know, back to basics with Ubuntu. This took me longer than it ever should have, so I’m just posting my travails here. I wanted to make my Ubuntu server essentially headless where I don’t have a keyboard, mouse, or monitor hooked up to it. Obviously this means remote desktop capabilities.

Sadly, the obvious and most often-used tools to accomplish this either require me to remotely log on from my Ubuntu laptop (yuck!) or require a session to already be logged on locally at the server (yuck!). Well, I want to be able to remote in even at the logon window after a reboot! Here are my steps.

sudo apt-get install x11vnc vnc-common
sudo x11vnc -storepasswd password /etc/x11vnc.pass
sudo gedit /etc/X11/gdm/Init/Default
add this at the bottom just above exit 0:
/usr/bin/x11vnc -rfbauth /etc/x11vnc.pass -o /tmp/x11vnc.log -forever -bg -rfbport 5900
sudo gedit /etc/X11/gdm/gdm.conf
change #KillInitClients=true to KillInitClients=false

I’ll probably end up changing all this once I decide to wrap it inside SSH, but since this will always be local (unless I VPN in remotely), I’m not as concerned about this setup. I might still tunnel it through SSH just to make sure I can.
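For reference, the SSH wrap would just be local port forwarding (a sketch; “server” stands in for the box’s actual hostname or IP, and 5900 matches the -rfbport set above):

```shell
# Forward local port 5900 to the server's VNC port over an SSH session,
# then point a VNC client at localhost:5900 on the laptop.
# "server" is a placeholder hostname, not a real address.
ssh -L 5900:localhost:5900 michael@server
```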

striving towards management by fact

Richard’s post about monitoring and “management by fact” got me thinking about security for the real world admin. What is the best sort of server to monitor? That’s easy: the server that requires the fewest changes. If you stand up a server and don’t need to do anything beyond patches and application-level updates (for a DNS server, adding DNS records…), monitoring that box becomes amazingly easy and informative.

You can quickly tell when something is wrong. Besides, a typical early step in troubleshooting (it is part of Cisco’s troubleshooting methodology) is to ask, “What changed?” This is something really near and dear to my heart, since I used to be pretty heavy into the sciences back in college: observable changes causing observable results. If something weird happens, figure out the one-off that caused it.
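As a crude sketch of that “what changed?” mindset (illustrative only; real integrity checkers like Tripwire do this properly), you can baseline a config tree and diff it later:

```shell
# Snapshot checksums of everything under /etc, then diff later to answer
# "what changed?". Unreadable files are skipped; paths are illustrative.
find /etc -type f -exec md5sum {} + 2>/dev/null | sort > /tmp/etc.baseline

# ...days later...
find /etc -type f -exec md5sum {} + 2>/dev/null | sort | diff /tmp/etc.baseline - \
  && echo "no changes since baseline"
```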

There are really two problems in business that fight a never-ending battle against the unchanging server.

First, the technical ability of the admin is crucial. Take a new DNS admin tasked with standing up a DNS server. It might not take long to get the server up and running, but getting it tuned for performance and security may take weeks, months, even years of small changes, mistakes, and troubleshooting. For an expert, experienced DNS admin, this “time to stable” is far shorter and much more assured. This is partly why we need more experts (training) in the back rooms of IT, the luxury of making mistakes to become experts, and time to do proper research, so we can be empowered to take on initiatives outside of our comfort zones (otherwise we just say “no”).

Second, business sometimes likes to cut corners, especially with money and especially with IT infrastructure. If a server isn’t choking, it must have room to put more on it, right? This defeats trying to efficiently “manage by fact” in the IT back rooms. If you have an SBS box that does basically everything that can be crammed into it, the constant flux of use and changes can make creating a baseline and monitoring for oddities frustrating.

I love the idea of managing by fact, and I think for most of security, that should be the goal to someday reach.

keystroke biometrics

Keystroke mechanics keep being talked about as a form of biometric identification. I’m still skeptical because of how variable this can be…

I live in Iowa which means we have some pretty cold winters. I certainly do type differently if I have cold fingers.

I also type vastly differently depending on my level of inebriation (of course, this can cause regular typos in passwords anyway…)

I type differently depending on my position and mood and keyboard and life. I type far differently now than I did 5 years ago, for instance. Sometimes I am in thought and might type differently, especially on some sort of password screen.

Do I think people type in ways different enough to tell who it is with an acceptable level of accuracy? Personally, I doubt it…

naming workstations

I just read Naming Workstations on a Windows Network and had to smile a bit. Something as simple as your workstation naming scheme can be a very complex process that is different for every single network from 10 users to 10,000. It just goes to show how varied our field is and how many different ways and opinions there can be.

My current job names workstations by OS and username. I dislike this scheme. At my old job early on I inherited and used a similar method where I named the workstations after the usernames. We had a smaller company of only about 60 users, and by the time we grew up to 150, we had had a security audit which pointed out that machines named in such a way leaked too much information (Low priority, I believe). Wanted to target the CFO? Find his name, enumerate the network, and you likely also have a username that has rights on that machine.

I switched us over to naming machines “wkst###” and maintained both an Excel spreadsheet mapping workstation name to the user assigned that computer (we checked out equipment to all employees) and also inventory management software which let me regularly map MAC, IP, usernames, and workstation names together. This way if “WKST125” was doing something naughty, I could very quickly isolate it, take control, and/or check on the user. Having administrative access on switches and remote control capabilities takes away a lot of the need for user-named or even departmental-named workstations when you have an inventory of MACs and domain admin rights! I never did reuse names either, and I had a strict personal policy that no machine was re-issued without first wiping and re-imaging it (sadly, some colleagues did not adhere to such policy later on), thus a perfect opportunity to rename it. I might leave orphaned entries and artifacts this way, but I would rather have orphaned data than data that might actively be lying to me if it wasn’t kept up to date.

we have to make mistakes

Security and IT are tough these days. While we keep getting an influx of people with their MCSE and A+ certs who can do fun things with desktop support, it is all those other, more specific areas of IT that still are not getting the love they should. Maybe it is because they’re a layer or two out of the eyes of most normal users (and managers). Too often, we techs can do a lot of good things but don’t get a chance to try things out when we’re already swamped with an overload of work, not enough money, and too many fires to put out.

Mark Curphey has been posting his experiences with his new start-up lately. While a lot of the content is not terribly pertinent to me at this point, I do enjoy reading him. Tech-to-tech, this paragraph really caught my eye:

Did I really transfer the domain to my account or was this someone snarfing my domain and my religious spam rules means I missed a very important mail? Alex was sat at his desk dreaming in code but saw I was panicking. We look at it and pulled up the whois records. Holy bull-shitake batman, some bastardo has snarfed my domain and the records show dummy, dummy, dummy as the new owner. We googled and others had been conned by the same trick. How could this happen? How could Gandi let someone transfer a domain without positive acknowledgement. Oh cricky, I really screwed up by being strict on spam.

Considering the theme of this post, I think it might be obvious what caught my attention. You can make an entire job out of being a spam admin or even a DNS/SSL/domain admin, even at smaller companies. But chances are, those tasks are only a very small (disturbingly tiny) part of our jobs. How can you get to be a spam surgeon? Do you have time to pick through what gets caught in the filters? Do you have time to even tune up the filters at all while maintaining high functionality for possibly critical emails? Just how are you tracking all your DNS and SSL purchases and expirations?

That’s tough, and I think unless you can acquire these skills somewhere or have a job that lets you have a lot of bandwidth to research and tinker with such things, outsourcing to a company that can focus on just that one thing is still a big IT need. That or understanding what techs need to ultimately be successful. Can you really maintain a spam filter effectively, or would it be more efficient to outsource to a company that specializes in spam filtering?

That is one area I think still needs work in the “business and IT must work better together” agenda. We don’t know everything in IT and we really do have to make mistakes. I’ve learned that you learn the most about technology during the troubleshooting stage as opposed to when everything is going right. Business is not terribly forgiving about such things, even if they are small but visible incidents in the whole scheme of things. Business wants to make a request, have it implemented perfectly, and then run unattended for 25 years without any further investment. IT knows better and that any new technology not only must be learned, monitored, and administered, but at some point does need to be evaluated for security, efficiency, and proper improvement.

playing with openwrt

Played briefly with OpenWRT this weekend. I have an extra Linksys WRT54G (v2.2) WAP and I loaded up the appropriate OpenWRT firmware. OpenWRT unexpectedly imported all my previous settings from the Linksys default firmware, so I didn’t really have to do much besides plug in cables.

It should be noted that while Linksys products are administered via the web interface, OpenWRT’s web interface is really only useful for seeing some status information, setting very general settings, and viewing the list of installed and available packages. Everything else should be done via an SSH connection. Set the login password in the web interface while you’re there; this not only sets the web interface password, but also turns off telnet and enables SSH. Remember that you are essentially SSHing into a Linux box, so you SSH in as root (ssh root@&lt;router IP&gt;). Hopefully through the week I’ll look into playing with this box a bit more.
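An example first session might look like this (a sketch; 192.168.1.1 is only the Linksys factory-default address, so substitute whatever your WAP actually answers on):

```shell
# SSH into the router as root, using the password set in the OpenWRT
# web interface. 192.168.1.1 is an assumed (factory default) address.
ssh root@192.168.1.1

# Once on the router, packages are managed with ipkg in this era of OpenWRT:
# ipkg update
# ipkg list
```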