Just read about some more unnecessary security terms. “Evil twins” is already better described as a rogue AP, and “wireless phishing” is just lame.
Please, unless the method is brand new, don’t invent more terms for things that already have terms, for all our sakes.
what makes a good IT professional?
Locutus has an awesome post about what he feels makes a good IT professional, and I totally agree with him. Here is a quick summary, in the order he presents them:
1) A passion for the work
2) Ability to solve problems and research solutions
3) Ability to solve problems and research solutions under time and organizational pressure
I like his first point the most, as it is what I call the “geek” trait. I’m a computer geek, meaning my work is also my hobby, which is also my enjoyment. My tinkering with technology does not stop at 5pm nor start at 8am. It bleeds into nearly every part of my day and life.
This is the whole reason why I fight to have jobs where I can treat both “lives” as similarly as possible. When at home, I don’t wear a tie while ironing out a problem, so wearing one at work takes me out of my normal, and productive, state of comfort. (Not that I truly HATE it or something, it’s just a little thing.) Likewise, I have days where my productivity at home would dwarf my productivity at work, or at least it does when I’m happy. And if this is my hobby and what makes me happy, it follows that keeping me happy at work helps me be productive there as well.
Ok, end rant. 🙂 I’m sure I’ll complain about this until I actually have a job that doesn’t require a tie 80%+ of the time…and even then I’ll probably wear one regularly.
RE: small business IT
Andy has an awesome post about the realities of small business IT. IT infrastructure is expensive on its own, let alone implementing it in a secure, scalable, and proper way, and let alone affording the staff or consultants to support that IT and security. This puts pressure on individuals, small companies, and even mid-sized businesses to either spend that sort of money or accept the risks. This puts more emphasis on lightweight and open source tools. Which puts more emphasis on IT staff with those kinds of skills. Which puts more weight on paying their salaries.
As Andy says, even implementing the most basic things like backups can be difficult and painful.
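On that note, lightweight scripting can take some of the sting out of the basics. Here is a minimal sketch of the sort of dated-archive backup a small shop could run from cron; the paths and retention count are hypothetical, so adjust to taste.

```python
import os
import tarfile
from datetime import date

# Hypothetical paths -- point these at your own data and backup drive.
SOURCE = "/srv/fileshare"
DEST = "/mnt/backup"
KEEP = 7  # number of dated archives to retain

def backup():
    # Write a date-stamped, compressed archive of the source tree.
    archive = os.path.join(DEST, "fileshare-%s.tar.gz" % date.today().isoformat())
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=os.path.basename(SOURCE))
    # Prune the oldest archives beyond the retention count.
    archives = sorted(f for f in os.listdir(DEST) if f.startswith("fileshare-"))
    for old in archives[:-KEEP]:
        os.remove(os.path.join(DEST, old))

if __name__ == "__main__":
    backup()
```

It isn’t a real backup strategy by itself (no offsite copy, no verification), but it beats having nothing.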
Ahh, the continuing conundrums of IT security.
more IT journalism
Sometimes I really get worked up over the latest and greatest article that makes IT and IT security sound so easy on paper. I especially dislike reading things like that from a journalist who may not even know how to implement and support the given steps and commentary. While I can’t usually comment on their background and experience, sometimes it is pretty obvious when someone is writing about “good to haves,” “theoretical approaches,” and “best-case scenarios.” In reality, most companies will never match those steps.
Today’s victim is an article on the 8 steps to a secure network found on zdnet.com.au.
1. Verify the current connections – Verifying the connections on the firewall is a good exercise, so that you know your common endpoints. Sadly, this works only in small networks that have tight control over installed software and desktops. In a large network, this changes too much to be of much use. In networks that do not have tight controls, you can have a few instances of Skype constantly making suspicious-looking connections to various places in China, Taiwan, Iceland, Denmark, and so on. Investigating these is just an exercise in wishing for tighter desktop controls. It might be better to look for traffic to common destination ports like 22 or 21, or others that would be suspicious (see the sketch after this list).
2. Look at network traffic statistics – This is a good step, and any network admin should be pulling these stats or at least checking the latest numbers every morning. Sadly, this is usually the realm of a specialized network device or a Linux box doing some traffic analysis, two things beyond the reach of many admins. However, if the team has the aptitude to get good numbers, this is an excellent step.
3. Look at your antivirus logs – Centralized logging for host-based antivirus is either something a smaller network would love to have or an unnecessary traffic storm on a larger one. Network-based antivirus may be better suited here, or something at a chokepoint like the email servers. Checking for updated signatures should be mandatory, but checking for captured viruses is less interesting. Not only that, but the logs won’t tell you the more important information: what wasn’t caught by the signatures.
4. Read the security logs on your domain servers – Reading Windows event logs, particularly the security logs, is about as bad a task as I can think of in IT and security. Hopefully anyone with an interest in Windows security logs is aggregating them somewhere and alerting when things like logon failures occur. If password policies are configured to properly lock accounts out after 5 attempts and require admin intervention to unlock, this becomes more or less a waste of time.
5. Check for new security patches – As much as I take exception to most of these steps, I do like this one. Keep an inventory of important systems and software and do regular rounds of checks for security updates. This doesn’t need to happen every day, however. And hopefully you control and know what is on your network…if not, good luck getting everything adequately patched.
6. Meet and brief managers – Most of the time, the above 5 steps aren’t going to be terribly interesting. Step 1 might be interesting only because of the sheer number of “suspicious” connections that may or may not be around. Eventually this task will numb managers, and the meetings will turn informal and then non-existent. I think it would be more efficient to do this once a week.
7. Check more logs – Ok, I think this author is envisioning someone doing this job and only this job. All they do is check logs and security patches, kind of like a junior NOC operator or something. IDS/IPS logs should be checked, yes, but typically they are less useful than having someone check Snort or run some robust Linux tools for analysis.
8. Turn knowledge into action – This is a good step, but it should be part of every step mentioned above anyway. Take your information and work to get better information, trim down the unnecessary information, implement changes like security patches, and research new tools to do all of these steps better.
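Back to step 1: here is a minimal sketch of the kind of quick connection check I have in mind. It shells out to netstat and flags remote endpoints on ports that would look odd coming from a typical desktop; the port list is just an example for illustration, so tune it for your own network.

```python
import subprocess

# Destination ports that would look suspicious from a typical desktop.
# This set is only an example -- tune it for your own network.
SUSPICIOUS_PORTS = {21, 22, 23, 25, 6667}

def remote_endpoint(fields):
    # Local and foreign addresses both look like host:port; take the second one.
    endpoints = [f for f in fields if ":" in f and f.rsplit(":", 1)[-1].isdigit()]
    return endpoints[1] if len(endpoints) >= 2 else None

def flag_connections():
    # netstat -an exists on both Windows and Linux, with slightly different layouts.
    output = subprocess.check_output(["netstat", "-an"], text=True)
    for line in output.splitlines():
        fields = line.split()
        if not fields or not fields[0].lower().startswith(("tcp", "udp")):
            continue
        remote = remote_endpoint(fields)
        if remote and int(remote.rsplit(":", 1)[-1]) in SUSPICIOUS_PORTS:
            print(line.strip())

if __name__ == "__main__":
    flag_connections()
```

Even a dumb filter like this beats eyeballing raw netstat output every morning.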
conclusion – Overall, this sounds like a real cakewalk of a job, and likely all that someone who followed these steps would be doing every day. Unfortunately, the reality is different, and most admins need to wear various hats or attend to other projects. The steps above are typically the first things to go when time is short. That’s not ideal, but that’s reality for most of us.
linux as main box – part 6: oh to mount NTFS
I took the time needed to get Thunderbird all set up with my email on my Linux install. This was very easy since I use Thunderbird on Windows and was already quite familiar with the app. Good times!
I still need to get my hands on a legit or properly cracked (and still working) version of Windows XP Pro so that I can finish my VM install. I really want this so that I can run a few random little things that I need to run in Windows (like Ventrilo).
Next on my list is to iron out mounting my external hard drive with write access. The drive is formatted NTFS, the Windows standard. While there are tools and ways for Linux to write to NTFS properly, there are still (after numerous years) disclaimers warning that the whole drive may get hosed. So I need to dig out another drive and perform a full backup of this external drive. I need to do this anyway, as it has been a while since I backed it up. Either way, this shouldn’t be a huge deal. Copy data over, install the NTFS tools on Ubuntu, mount the drive, test out write/delete/move functions. Done!
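For the “test out write/delete/move functions” part, a little script beats poking around by hand. Here is a minimal sketch, assuming the drive ends up mounted at /media/external (a made-up mount point):

```python
import os
import shutil

MOUNT = "/media/external"  # hypothetical mount point for the NTFS drive

def test_write_access(mount):
    src = os.path.join(mount, "ntfs_write_test.txt")
    dst = os.path.join(mount, "ntfs_move_test.txt")
    # Write a file...
    with open(src, "w") as f:
        f.write("hello from linux\n")
    # ...move (rename) it...
    shutil.move(src, dst)
    # ...read it back, then delete it.
    with open(dst) as f:
        assert f.read() == "hello from linux\n"
    os.remove(dst)
    print("write/move/delete all OK on %s" % mount)

if __name__ == "__main__":
    test_write_access(MOUNT)
```

If any step blows up, better to find out on a throwaway test file than on real data.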
I also started playing with the new tools that Linux opens up to me. I installed kismet and played with it a bit, far deeper than I’ve ever played with it before on livecds like BackTrack. I even got to figure out how to edit shortcuts, the Gnome desktop layout, and application menus. More good times!
sysadmin jokes for your manager
Just a couple ideas for office pranks on the managers.
1) Order up some jars or vases (the more magical or Aladdin-like the better, and add cork tops too!) and fill them with colored sand. Either use solid colors or even do that cool layering for a more rainbow-like effect. Keep the jars of sand on your desk and label them: “malware cleaning,” “speed booster,” “error fixing.” Then when your manager comes by asking about an error or problem on a server, wordlessly choose the appropriate jar of sand and disappear into the server room…
2) Get a bit of white sand or salt and make a line of it on a server room desk (I don’t recommend your cube, in case someone reports you!) like it is a line of cocaine. When the manager finds it, or walks in while you’re slaving away during some important downtime, let them know that they’re driving you so hard you have to do cocaine just to keep things running.
malware detects VM use and prevents execution
This presentation discusses new techniques associated with malware detecting the use of a virtual machine. Researchers typically examine malware on virtual machines, so if malware can detect a virtual machine and then refuse to execute, reverse engineering it becomes a little bit more difficult. Could this mean running a thin client connected to a desktop virtual machine might be more secure? Perhaps, but I think it is more likely to result in some really bad malware should any vulnerabilities be discovered in the virtual drivers or virtualization software. It is still a bit disappointing that virtual machines can be detected (beyond just the drivers saying “vmware display driver,” for instance). Then again, it might be asking a little too much to expect VMs to be indistinguishable from physical systems.
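To make the idea concrete, here is a minimal sketch of the kind of low-effort checks a sample can make on a Linux guest. Real malware uses sneakier tricks (CPUID quirks, instruction timing), but even these lazy giveaways are often enough; the paths are standard Linux locations.

```python
# Strings that commonly betray a virtual machine to even lazy checks.
VM_MARKERS = ("vmware", "virtualbox", "qemu", "kvm", "xen", "virtual")

def read_file(path):
    try:
        with open(path) as f:
            return f.read().lower()
    except OSError:
        return ""

def looks_like_vm():
    # 1) Modern CPUs expose a "hypervisor" flag to guest operating systems.
    if "hypervisor" in read_file("/proc/cpuinfo"):
        return True
    # 2) DMI strings often name the virtualization product outright.
    for dmi in ("sys_vendor", "product_name", "board_vendor"):
        contents = read_file("/sys/class/dmi/id/" + dmi)
        if any(marker in contents for marker in VM_MARKERS):
            return True
    return False

if __name__ == "__main__":
    print("virtual machine detected" if looks_like_vm() else "looks physical")
```

Flip the logic around and the same checks become the “prohibit execution” half of the trick.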
malware analysis: free video codec
This malware analysis is amazingly interesting to read. While not too deep technically, it is the kind of analysis that is well within the reach of a typical sysadmin or desktop support person.
A few points on why this is significant.
1) The malware is downloaded by social engineering someone into downloading a free codec in order to play some video. This is not atypical behavior; in fact, I see this every now and then with legitimate (non-porn) movies and happily go searching for codecs or just let the player auto-check and install. A typical user will be fooled by this attempt, as could any user searching for the codec randomly (if you need a DivX codec, you hit divx.com; you don’t randomly search for and install one of the myriad odd “divx” codecs from mysterious sites).
2) The malware took over the DNS queries of the system and even actively took over browsing targets in IE. Could this malware receive commands via DNS responses? It is definitely possible, as the analysis author mentions (I really like when authors illustrate just how bad things could get with a piece of malware), that false DNS responses can be served. You want Windows Update? No, you want our site to download fake updates with more malware! I’d really like to see some packet captures of the results, to see if they are abnormal in any way. (See the sketch after this list.)
3) Just goes to show that if malware can get you to execute a file on your system, that system is no longer your system.
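As a follow-up to point 2, here is a minimal sketch of a sanity check you could run on a suspect box: compare what the local resolver returns against a resolver you chose yourself. It assumes the third-party dnspython library, and the trusted nameserver IP is just an example; a mismatch isn’t proof by itself, since load-balanced sites legitimately return different IPs.

```python
import socket
import dns.resolver  # third-party: dnspython

TRUSTED_NS = "4.2.2.2"  # example public resolver -- substitute one you trust

def compare_lookups(host):
    # What the (possibly hijacked) local resolver configuration says...
    local = socket.gethostbyname(host)
    # ...versus what a nameserver we picked ourselves says.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [TRUSTED_NS]
    trusted = {rr.address for rr in resolver.resolve(host, "A")}
    verdict = "OK" if local in trusted else "MISMATCH -- investigate"
    print("%s: local=%s trusted=%s -> %s" % (host, local, sorted(trusted), verdict))

if __name__ == "__main__":
    for host in ("windowsupdate.microsoft.com", "www.google.com"):
        compare_lookups(host)
```

Pair it with a packet capture and you would see exactly where the bogus answers are coming from.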
the road to web 2.0 – myspace is out of place
If we’re in web 2.0 right now with Gmail, Ajax, Ruby, YouTube, Flickr, and so on, what was before that?
web 0.1 – The first web sites; not much to speak of, and I doubt any still exist.
web 0.5 – Around 1995-1998ish with the annoying proliferation of flaming torches, animated rainbow lines, embedded midi, and terrible design. GeoCities is a household name (albeit in geek households).
web 1.0 – Everyone can be a web designer, and designs actually started to mature and not look quite so “GeoCities.” Embedded midi is out. Animated gif attacks are out. Stylesheets and databases are in.
web 2.0 – Not everyone can be a web designer. Programmers and extra-mile languages are taking over to offer full application-style sites. Objects are in, playing with code is out. The tools are sophisticated enough that web newbies don’t need to code, they can click buttons, sliders, toggles, and otherwise drag-n-drop content.
So, where does MySpace fit in? The answer is, it doesn’t. MySpace resembles web 0.5, with annoying embedded music, terrible designs, and atrocious layouts. It really is a modern GeoCities (now, there are many people with very nice-looking sites, but random browsing on MySpace is an exercise in ugly).
But so many people and bands and groups are posting there and using it to host their official sites. This means that MySpace either needs a makeover to become web 2.0 compliant, or someone will take that space over and offer exactly what MySpace offers, only easier, prettier, slicker, sexier, and more modern. Considering the “ugly” stigma that MySpace has, getting people onto a better new service shouldn’t be much harder than it was for Google to topple Yahoo back when Yahoo went out of style and Google was “it.”
wireless driver flaws highlight 2005
I was putting up a list of things to “predict” for next year, for my own amusement. It looks like one is coming true sooner than intended, as the Month of Kernel Bugs has released a second wireless driver flaw along with a Metasploit exploit.
There are three reasons this is huge right now: 1) lack of patching channels, 2) lack of hardened drivers, and 3) the growing emphasis on mobility and wireless.
While Windows and other OSes and software apps have various levels of seasoned update and notification processes, the driver community has no such luxury. In fact, neither do the companies who ship hardware and drivers, like Dell, Gateway, HP, and so on. Customers are really on their own to know there is an issue, find the right driver (still easier said than done on most of those sites), and install it properly (still sometimes a very arcane and archaic process).
This is a huge mess that isn’t waiting to happen anymore; it’s happening now. I now predict that 95% of all affected systems will not be patched until they are either rebuilt or retired to a garbage heap.
Second, drivers have long been relatively untouched in the media, and as such all their vulnerabilities and code issues have remained in the underground, if anywhere. But combine wireless proliferation, fuzzing, and virtualization, and it was just a matter of time before hardware drivers got the evil eye. Sadly, driverland is not ready for such attention, and I expect a lot of vulnerabilities to be exposed in the next few years in various hardware devices. The code is soft and not hardened over years of exploits and poking.
This is also important because of the growing prevalence of wireless capabilities and laptops roaming around all over, and because default settings leave wireless network cards turned on. All it takes to be exploited is a running laptop with an active wireless card; it doesn’t even need to be associated with a network to be rooted. It can then, possibly, spread.
I also predict there will be some wormable exploits popping up, though thankfully they should only be problems in larger hotspots like airports, college campuses, or muni-wifi implementations. Still, this could slowly spread from laptop to laptop in an apartment complex or metro area.
security denial by lack of action
We have a lot of denial about security in our society right now.
Many people will admit, sometimes after a few thoughts, that breaking into someone’s house is typically not that hard. Watch “It Takes a Thief” on Discovery and you’ll see the same fundamental issues come up most of the time. But as much as people will grudgingly admit how easy it could be, that is typically just unthinking lip service. Very few people, deep down, admit they can be victimized. Very few take the time to implement fundamental security measures that greatly reduce the risk of a break-in, something as simple as a security alarm and proper locks on the doors…and then they shed tears and feel violated when they do suffer one. Do we just like to pretend it won’t happen to us? Or do we just not want to spend the money or the effort? Typically, all it takes to break into someone’s house is a little bit of effort and balls enough to overcome the internal sense of right and wrong.
Identity theft is still very easy to accomplish. But most people, while they will grudgingly admit that it is easy, still make little to no effort to protect themselves.
Security is often something that is talked about but never truly taken seriously enough to change behaviors until after a security event. I would bet it is unanimous amongst people who have suffered a break-in that they wish they had had more measures in place, and I bet most have them now.
At any rate, it is interesting that security can be something that sounds good when people talk about it, but they still too often end up doing nothing, and by that lack of action, end up denying that they can be victimized.
least user access
I almost always see “least privilege” or “least user access” and click into the article wondering what it will cover. Without fail, it is always that age-old discussion of whether users should be running as admins on their local machines or not.
What about the other aspect of least user privilege? Namely, the file servers. How are company file server resources allocated? How are requests for access to information handled? Not everything is in databases or web applications. So, what about this very important topic?
I wonder if this is because very few people understand the nuances of managing security permissions in anything but a tiny environment (at least among IT journalists, anyway). While it might seem easy to isolate, say, developer files, what about when we start talking about collaboration or dynamic teams that span multiple departments?
Weird, considering I would expect many organizations to be very bad about tracking and reporting on actual user access or even managing that access at all.
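To put a concrete face on the file-server side of this, here is a minimal sketch of the sort of audit I have in mind: walk a share and flag directories that grant access to Everyone. It assumes Windows with the third-party pywin32 package, and the share path is made up.

```python
import os
import win32security  # third-party: pywin32

SHARE_ROOT = r"\\fileserver\dept"  # hypothetical share to audit

def grants_everyone(path):
    # Return True if the directory's DACL contains an entry for Everyone.
    sd = win32security.GetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION)
    dacl = sd.GetSecurityDescriptorDacl()
    if dacl is None:
        return True  # no DACL at all means everyone gets full access
    for i in range(dacl.GetAceCount()):
        sid = dacl.GetAce(i)[-1]  # the SID is the last element of the ACE tuple
        name, domain, _ = win32security.LookupAccountSid(None, sid)
        if name.lower() == "everyone":
            return True
    return False

def audit(root):
    for dirpath, dirnames, filenames in os.walk(root):
        try:
            if grants_everyone(dirpath):
                print("open to Everyone:", dirpath)
        except win32security.error:
            print("could not read ACL:", dirpath)

if __name__ == "__main__":
    audit(SHARE_ROOT)
```

Even a crude report like this tends to surface folders nobody remembers opening up.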
on the forefront of technology
A quote from an ITBusiness article:
“You gotta be mobile, regardless. While it may pose great [security] risks, it’s a greater risk to fall behind,” Levy said.
It goes without saying that you can’t let your networks and systems linger and gather dust so much that we get another “it’s 2004, why are you still running Windows 98 systems?” situation. As support drops off, so too should use. Just look at SCADA systems for what not to do…
However, there is still something to be said for being on the forefront of technology and not sitting around playing catch-up five years behind or more. I think it could help IT’s perception if IT were closer to the forefront of technology, enabling and assisting employees more. This might be a bit dangerous in some cases, but I think in most cases the only real danger is overspending on new things that may or may not work out in the long run. Thankfully, a technology decision these days does not have to be one you live with for 20 years…or even 5. Everyone in business makes mistakes; IT should be held in no different regard. If we move forward with mobile devices before they become fully mainstream and it doesn’t work out, so what?
I could go into a lot of the benefits and risks and goods and bads, but I think it is interesting to imagine the change in approach when it comes to just doing some things and figuring out the security later. Perhaps this is a bad idea for most, but it is still something to always think about. Why wait 3 more years before encouraging mobility in the organization? Why not just do it now and deal with the risks, issues, and technology? Why wait for users to clamor louder for IM, instead of moving forward with dealing with IM in the organization now?
Now, this is weird for me to be saying. I typically am not an early adopter. But I do have an excuse: in college and beyond, I have not had much leisure money at my disposal to delve into new things. My attitude is certainly ready to change now that I am crawling out of debt far enough to see the edge clearly.
Another quote from the same article:
“Levy suggested that access-based protections (like dual-function authentication) are imperative, and end-to-end encryption is necessary. These technical failsafes should form the foundation for rigorous employee training from the IT department, said Levy… The employees need to become experts in mobile security, he says.”
I don’t like this statement. I think the average user needs to get used to doing things with security in mind, but it is ridiculous to expect employees to become experts in mobile security. Mobile security is tough enough for professionals working with it every day, let alone everyone else trying to do their own jobs. While training is necessary and employees do need to be at least a little bit security-conscious and accepting, it is up to technology and technology professionals to be the experts in security. We do not expect everyone to be an expert on the internal workings of their car or the proper use of complicated and ephemeral security measures. Instead, things just work, they just do their thing, and we take our cars to the professionals for anything beyond our control or understanding.
month of no posts
Wow, it looks like I’ve gone an entire month without making a post here. That was certainly a quick month, and I do have a backlog of things and links and tools to look at and post about.
My reasons for the lack of posts are two-fold, really. First, I have been holding back on a lot of stuff since I really want to convert this space into more of a wiki format. A wiki is much more appropriate for what I am using this site as. I had some issues last month getting Apache 2 and PHP 5 to get along, so I have to check and see if that was resolved.
Second, I’ve moved a lot of my more discussion-style technical posts to my main blog instead of here. I am not sure if that is how I will do it in the future, as all my own non-technical stuff is being diluted by the technical jargon that many of my family and friends know nothing about. Maybe I’ll load it all back here once I get the wiki up, and still have a sort of techie blog/news listing on the front page.
In the meantime, I hope to post some more things here anyway, regardless of the wiki progress.