installing pidgin 2.2.0 on ubuntu 7.04 to use google talk

I recently decided I needed to use Google Talk. I don’t know why, but I have Gmail accounts, so why not buddy up to Google Talk? I use Pidgin 2.0.0 on my Ubuntu 7.04 laptop and was having no luck getting XMPP (Google Talk) to connect properly. An upgrade to 2.2.0 is in order, right? Unfortunately, nothing in the default repositories will upgrade Pidgin. Great! When I did the following steps, I did not have to remove my old Pidgin installation, and all settings and buddies were carried over just fine.

First, I need to update my repository list:

sudo gedit /etc/apt/sources.list

and add the following lines:

deb http://repository.debuntu.org/ feisty multiverse
deb-src http://repository.debuntu.org/ feisty multiverse

Then run the following commands:

wget http://repository.debuntu.org/GPG-Key-chantra.txt -O- | sudo apt-key add -
sudo apt-get update
sudo apt-get install pidgin
sudo apt-get install pidgin-libnotify
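
To confirm the upgrade actually took (a quick sanity check, not part of the original guides), apt can report which version of the package is now installed, and Help -> About in Pidgin itself should agree:

apt-cache policy pidgin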

After this, Pidgin can be started from Applications -> Internet -> Pidgin. Once the app has started, I want to connect to Google Talk: Accounts -> Add/Edit -> Add -> Google Talk.

The protocol is XMPP by default. The screen name is my Gmail login, the domain is gmail.com, and the resource is left at the default of Home. In the Advanced tab, I checked Require SSL/TLS, set the connect port to 5222, and set the connect server to talk.google.com. I left the Proxy type at Use GNOME Proxy Settings.
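
For reference, Pidgin keeps these account choices in ~/.purple/accounts.xml. The fragment below is only an illustrative sketch of roughly what that entry looks like; the exact setting names are my assumption and may vary between 2.x releases, so don’t treat it as authoritative:

<account>
  <protocol>prpl-jabber</protocol>
  <name>mygmaillogin@gmail.com/Home</name>
  <settings>
    <!-- setting names below are a best guess, not verified against 2.2.0 -->
    <setting name='connect_server' type='string'>talk.google.com</setting>
    <setting name='port' type='int'>5222</setting>
    <setting name='require_tls' type='bool'>1</setting>
  </settings>
</account>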

References
installing pidgin 2.2.0
connecting to google talk

secutor prime examines desktop compliance checklisting

I don’t do much desktop work right now, but it is still nice to see how a system compares to various standards. I’m not sure where I picked this up yesterday, but I got pointed to a tool, Secutor Prime, which examines a system and compares it to various standards such as the FDCC. The best part of this tool is the feedback: clicking on any check gives the findings and also the steps needed to pass that particular test. An excellent way to learn more about desktop security, the settings involved, and what compliance checklists look for.

the security silver bullet syndrome in negative exposure

It’s not often someone hits a pet peeve of mine dealing with security, but I bristled at one just now.

One of my tenets of security is that there is no silver bullet or security panacea. I think we universally accept that.

But there are insinuations and beliefs that, in a way, are saying there really is a silver bullet. Most of these have to do with saying “Security measure X is not 100% effective, therefore it is useless/inefficient/expendable.”

I’ve seen this with Jericho Forum defenders who say the perimeter is porous now, which must mean the firewall is less efficient, which must mean we’re moving towards no perimeters. “What use is a perimeter defence with holes in it after all?”

Such a statement is analogous to saying, “I expect my security measures to be silver bullets.”

I don’t think I’ve stumbled downhill nearly that violently since breaking my leg sledding one winter…

some logging notes

Cutaway has an excellent interview up with Michael Farnum, who talks about his experiences with companies on a number of topics, particularly logging. Does he see companies logging, are they doing it properly, and so on. Excellent insight into what’s really going on, and not as untrustworthy as a sheet of stats from some vendor with an agenda.

Reflecting on the questions and answers, here are some of my own bullet points when it comes to centralized logging discussions.

1. The IT team needs to see value in the process of logging and reading logs. If they don’t see value, they either won’t do it, won’t do it properly, or will have no clue how to leverage it. If they don’t see value and the business sees no value, it just plain won’t get done. This almost always ends up being an operations value-add rather than a security one. Something went wrong with a web app: can you troubleshoot it by looking at the logs? Or a server isn’t updating properly from WSUS…and so on. Logging should be seen as being as important as a heart monitor on a patient in the hospital.

2. Once there is value, or maybe even before the value is realized, admins need the time to properly get things set up. Only having enough time to gather Windows event logs and nothing else is going to be a wash. Same with only gathering the logs from half your firewalls. Give the team enough time to properly get things going.

3. Set aside time for the admins to regularly look at logs and maybe even “play” with the logging server (see the quick sketch after this list). If admins don’t have time or are not allowed to use the logging reporting and querying regularly, they won’t have the familiarity to do it when emergencies or high-profile incidents arise. Practice, practice, practice.

4. For the love of whatever, read Anton’s paper(s) about the six mistakes of logging.
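
As a tiny example of the kind of “play” I mean in point 3 (a hypothetical sketch, assuming a Windows box with PowerShell; your logging server will have its own query tools), summarizing recent audit failures out of the Security event log is a one-pipeline job:

# needs admin rights to read the Security log
Get-EventLog -LogName Security -Newest 1000 |
    Where-Object { $_.EntryType -eq "FailureAudit" } |
    Group-Object -Property EventID |
    Sort-Object -Property Count -Descending |
    Format-Table Count, Name -AutoSize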

My own logging? At home, I don’t do enough. At my last job, we did logging, but didn’t use it enough or, probably, use it properly. At my current job, we don’t do enough logging at all.

how do you eat your 0day?

There is an interesting discussion this week on the Full Disclosure mailing list about the definition of “0day.” Oddly, what seems like an old term is definitely not a term with an understood and universal definition. It seems to vary widely, dramatically widely. Then again, FD is a fairly argumentative list, with some people arguing anything just to argue. Still, the lack of clarity in some of our widespread terms is interesting.

My take on 0day, which I’ve used ever since I first heard the term many years ago, is pretty much the same as the Wikipedia entry. To me, a 0day is an exploit released before solutions or patches have been disseminated by the vendor. That means a new strain of a virus exploiting a known vulnerability would not be a 0day, but a new worm exploiting a new vulnerability would qualify. A side question is whether something is still a 0day to someone who has already seen it and provided a workaround, even though they’re not the vendor. To me, 0days are somewhat unstoppable exploits, mitigated by defense in depth / layered defenses.

And don’t even bring up “less than 0day,” as I feel dumber each time I hear that term…

unisys and dhs security debacle

The other day I posted about Unisys and the DHS. After reading a post from Bejtlich, I see they’re fully wading into it together. Ugh.

While I won’t defend Unisys, I’ll play Devil’s Advocate for just a moment. Was Unisys just providing the systems and process, with DHS meant to actually put things into operation? And I wonder if there were any obstacles imposed by DHS that prevented things like IDS systems from being implemented? I know it can be a pain when you’re asked to install ABC onto 45 systems, but half of them keep telling you they’re too busy and to try again next week.

It obviously sounds like Unisys made some really poor decisions, but I’m curious about the extent of them, from Unisys and from DHS itself, if any. Thankfully, this is the transparent government and not private companies, so we get to watch the laundry shake violently in the wind.

when terminal/server is reinvented as desktop virtualization

Ever read an article that makes you kinda stop anything else you’re doing as you try to make sense of it? Then read it again, which doesn’t help…then read it in bits and pieces to see if you can make sense of the parts in order to tackle the whole? And then maybe still wonder what sort of crack the author is on? I had that this morning reading an eWeek article, Analysts Predict Death of Traditional Network Security. I guess there’s a reason I didn’t re-up to eWeek a few years ago. And it is just coincidence that the topic is de-perimeterization and mentions the Jericho Forum, I swear!

According to them, in the next five years the Internet will be the primary connectivity method for businesses, replacing their private network infrastructure as the number of mobile workers, contractors and other third-party users continues to grow.

…So the Internet is not already a primary connectivity method? I guess I underestimate the Frame Relay and dedicated links market dramatically!

One of the end results of the death of traditional network security will be a growth in desktop virtualization, Whiteley said.

Hey, that’s kinda cool to read. In fact, we’re right now doing some desktop virtualization for mobile employees, particularly developers offsite. They VPN into our network with a system, then Remote Desktop into a virtual machine on our network, on which they do their work. Odd…I never once thought of this approach as being part of de-perimeterization or the death of the nebulous “traditional network security.” It’s a way to avoid bandwidth restrictions and data egress.

Desktop virtualization allows a PC’s operating system and applications to execute in a secure area separate from the underlying hardware and software platform. Its security advantages have become a major selling point, as all a virtualized terminal can do is display information; if it is lost or stolen, no corporate data would likely be compromised since it wouldn’t be stored on the local hard drive.

And this is where we finally stop toeing the brakes and actually put some pressure down on the pedal. I don’t think the author was involved with something called terminal/server architecture before, since that’s what he described. He did not describe desktop virtualization. Maybe we’re seeing the bastardization of terms…which is unfortunate. There is a point to be made about moving to virtual desktop systems and also moving back to terminal/server setups, but it really has nothing to do with de-perimeterization or the use of the Internet to connect businesses. It has to do with support costs, desktop OS compliance activity, and data security. All of which are vague and ubiquitous enough to “support” pretty much any security theory or initiative. Part of my religion is predicated on you breathing regularly. If you breathe regularly or believe in breathing, then you support my religion. Um, no.

The adoption of PC virtualization would mean companies would no longer have to provision corporate machines to untrusted users, Lambert said. Desktop virtualization simply equals a more secure environment, she said.

Hrm, I don’t follow that reasoning at all. In fact, this is a three-punch combo in confusion. People provision computers to untrusted users? Desktop virtualization means you don’t have to provision anything now? And somehow that makes things all more secure? I’m feeling nauseous…

I think the author and the people quoted in the article (Forrester analysts) need to take a step back and iron out what they mean by desktop virtualization and how that compares to the age-old terminal/server environment, and move forward from there. But some of these conclusions just don’t follow, and the muddiness of the terms and logic makes the article a waste of time.

switch basics: loading up a wiped cat 2950

Holy crap, 9600 baud is slow! I’m doing something different in loading a wiped switch, and I thought I would use an xmodem transfer. Go me! Since this is taking so long, I may as well post some switch basics as I go. (To note, my earliest speeds on the Internet were 14.4kbps modems back in high school.) I’ll also go ahead and put on some background music, the excellent Dubnobasswithmyheadman album from Underworld (a favorite!).

I have a completely wiped Cisco Catalyst 2950T switch. Even the flash has been erased (an eraser of love). If you boot it up, it gives an error and stops pretty quickly. A quick “dir flash:” will show nothing. I also have an IOS image ready and waiting: c2950-i6k2l2q4-mz.121-22.EA8a.bin. For my console system I have an old Dell Latitude laptop (yeah, it’s one sexy-small laptop!) running a permanent install of BackTrack2.

To get the c2950-i6k2l2q4-mz.121-22.EA8a.bin file to BackTrack2, I decided to also test my tftp server and use tftp to transfer the file. My tftp server is at 192.168.10.108.

tftp 192.168.10.108 -c get c2950-i6k2l2q4-mz.121-22.EA8a.bin

Gosh, that’s easy. Now I need to connect up to the switch by plugging in the necessary cables, including power so that it powers on and boots. I decide to use CuteCom in BackTrack2 as my graphical terminal emulator. I change the baud rate to 9600 and click Open device. I type a few commands to get ready for my transfer.

switch: flash_init
Initializing Flash…
…The flash is already initialized.
switch: load_helper
switch: copy xmodem: flash:c2950-i6k2l2q4-mz.121-22.EA8a.bin
Begin the Xmodem-1k transfer now…

At this point the terminal is waiting for data. CuteCom has a Send File button at the bottom where I can select the file and start transferring at a blistering 9600 baud! In fact, after writing this, I’m still only up to 15% completed. Ah, the joys of a wiped device that doesn’t even know what an IP address is yet.
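
Once the transfer finally finishes, the plan (the expected next steps; the copy is still crawling along as I write this) is to confirm the image actually landed in flash and then boot from it:

switch: dir flash:
switch: boot flash:c2950-i6k2l2q4-mz.121-22.EA8a.bin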

i blame you for whatever went wrong to me today

Articles like this one about DHS looking to investigate a government security contractor illustrate some of the crap (normal business activity) that occurs in our industry. I’m not going to presume I know the full story, what was in the original contract, or what Unisys’ opinion is, but I think this article highlights two painful realities.

1. If DHS is attacked and has someone to blame, such as a contractor who should be taking care of things, the blame can and likely will be shifted, rightfully or not. This basically means the “information age” is not just surging along and pulling culture with it; business culture now requires that information be saved and documented to avoid he-said-she-said crap. So unless Unisys goes the proverbial extra mile in the contract and also documents all deviations and obstacles, there will always be a scapegoat, because security will always eventually fail. And shifting responsibility for things onto everyone else is a hallmark of the 90s and 00s. (All starting with the McDonald’s woman who spilled hot [no shit?!] coffee on herself and successfully sued.)

2. The government is opening up competing bids for the contract. That means a major differentiator will be cost/price. And we can all guess how the quality of security may follow the line of price: the lowest bid will almost certainly ensure the security is also of lower quality.

jericho 6 – my conclusions

I’ve been checking out the Jericho Forum commandments (pdf) and their concept of de-perimeterization. I’m happy to have taken the time to sit back and examine the material they have posted. Whether I agree or not, it is useful to examine the discussion and what other groups think.

1) I stand by my initial thoughts that the concept of “de-perimeterization” is old. I really bet this concept is rooted back in a time before deep inspection firewalls, and maybe even before stateful firewalls. The term is unfortunate and likely needs to be changed, unless they are using it just for the attention. If so, it works! 🙂 But otherwise, I don’t buy that de-perimeterization is the future. Sure, maybe the borders of yesterday were nice and square like the state of Colorado. But today, and maybe into the future, our borders will be more complicated, like the islands of the Nunavut Territory in Canada (am I the only one who missed the Northwest Territories being split? And does that mean I don’t know my geography? …the flaw in quizzing adults about geography and generalizing the result down into child education values…). Nonetheless, there are still borders, and we will always have a perimeter of some sort for as long as we need any type of centralized management of systems or data.

2) The commandments do make for an excellent ideal. A possibly unattainable ideal. I’m dubious about the scale of such solutions, and I really think this framework only works on a very large scale. Anything below that really can’t be bothered.

3) On the other hand, this framework does include excellent guidelines and “rules.” Even if they are not followed to the letter, they are rooted in solid digital security concepts. We should keep them in mind no matter what ultimate framework we follow.

4) Likewise, I really think all security professionals should review what the Jericho Forum is saying, and I’d love to attend a presentation some day for even more clarification and discourse. As sec pros, we should be able to discuss such things and keep an open mind about other viewpoints. Besides, if there were an ultimate and perfect solution to our problems, I’m guessing we’d have happened upon it by now and all been wowed to the point of tears. But we haven’t been, and as such, any and all approaches tend to have strong points and good ideas.

5) In the end, do I care about this framework itself? Not really. It’s a great exercise, but not really actionable for me in a smaller company beyond just being informed.

jericho 1 – de-perimeterization and the jericho forum commandments
jericho 2 – the jericho forum and the de-perimeterization solution
jericho 3 – the first three commandments: the fundamentals
jericho 4 – commandments 4 – 8
jericho 5 – commandments 9-11
jericho 6 – my conclusions

jericho 5 – commandments 9-11

Continuing my smallish review of the Jericho Forum commandments (pdf) and their concept of de-perimeterization, I have just three commandments left, all under the category “Access to data.”

Access to data should be controlled by security attributes of the data itself

– Attributes can be held within the data (DRM/Metadata) or could be a separate system
– Access / security could be implemented by encryption
– Some data may have “public, non-confidential” attributes
– Access and access rights have a temporal component

This sounds like a Mandatory Access Control system, where data contains attributes that determine access and use. This is a bit odd, since I have only heard of such systems being used by governments (classified, unclassified, top secret…).

This also sounds like DRM, which, nicely enough, is mentioned by name in the bullets! One problem with DRM and metadata is forcing adherence to the metadata or DRM (let’s call it all DRM for my own sake). What if you have metadata that dictates FileX should only be used by 15 people? What if I come in and read FileX but decide to ignore the DRM tags? Is this another form of encryption? Why can’t I just leverage the DRM to get the data and then move it elsewhere as a copy? Sound familiar? It should, since we’re seeing how useful or futile DRM processes can be with media and copyright.

MAC has worked for the government and military for a long time, but I think that has to do with a) the rigid discipline of the military and secret organizations, and b) the long-term, habitual, forced use of it. Can this be as rigid and forced globally? I can’t see that happening in the foreseeable future.

Overall, oddly, I do like this commandment. Even if I don’t buy into the specified mechanics, I agree we need to focus on data. Not to the exclusion of the network or systems, but focusing on the data needs to be part of the security equation.

Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges

– Permissions, keys, privileges etc. must ultimately fall under independent control, or there will always be a weakest link at the top of the chain of trust
– Administrator access must also be subject to these controls

Hoo-boy…this is a tough one. This commandment pretty much ensures that data protection solutions will be complex. Ultimately, you do need someone who turns the keys when it comes to protection. Maybe two people, or three, but someone somewhere will have the power, or a collusion of parties will have the power. And that’s in extremely complex setups for separation of duties/privileges.

But even if this commandment is complex and maybe ultimately not of interest or achievable for most organizations, it is a good guideline to work toward. In most organizations, plenty of people have domain admin credentials and a need to create accounts. These tasks/privileges can be separated out among various people, with various auditing and authorization chains.

Is this scalable for small companies with one IT person, or even medium-sized companies? Good question, and likely not. Even on my current team of 5 network guys and 3 desktop guys, we really don’t have the corporate interest in slowing down our processes to fully achieve this idea. We do so for a couple of tasks and privileges, but otherwise it is just not worth our time to figure out.

By default, data must be appropriately secured when stored, in transit and in use

– Removing the default must be a conscious act
– High security should not be enforced for everything; “appropriate” implies varying levels with potentially some data not secured at all

In other words, the default should be secure. If you want something less secured, you have to consciously choose to unsecure it, or dial the security controls back to an appropriate level. Sounds good to me, although I think this commandment is much more attainable in closed networks, i.e. networks with boundaries.

Oh, wait, hold on…did I say networks with boundaries? Yup! Networks with perimeters! Without perimeters…well, that means either the whole Internet needs to run on new protocols (which I believe the Jericho Forum would like to see happen) or we need a global IPSec (or encryption/PKI) setup that is trusted by all. Ack.

Of interest, this seems to be the only commandment that allows some leniency. Someone determines what is appropriate, rather than the blanket, rigid statements of most of the other commandments. Quite interesting to have a subjective commandment in here, but still appropriate.

jericho 1 – de-perimeterization and the jericho forum commandments
jericho 2 – the jericho forum and the de-perimeterization solution
jericho 3 – the first three commandments: the fundamentals
jericho 4 – commandments 4 – 8
jericho 5 – commandments 9-11
jericho 6 – my conclusions

sending emails with powershell

Sending emails with PowerShell is pretty straightforward. Emails can be sent either through a normal SMTP server (the default) or dropped into a local instance of IIS to be picked up and delivered. Both can be useful depending on the situation.

First, I want to prove I can send email from my workstation through the fictitious mail server at mail.server.com. Each of the .Send method arguments can be a string variable if needed.

$smtp = new-object Net.Mail.SmtpClient("mail.server.com")
$smtp.send("mike@server.com","mike@server.com","test","test")
$smtp

Host : mail.server.com
Port : 25
UseDefaultCredentials : False
Credentials :
Timeout : 100000
ServicePoint : System.Net.ServicePoint
DeliveryMethod : Network
PickupDirectoryLocation :
EnableSsl : False
ClientCertificates : {}

This next example dumps the email to the local IIS instance. Just change the DeliveryMethod and then send the email as normal.

$smtp = new-object Net.Mail.SmtpClient
$smtp.DeliveryMethod = "PickupDirectoryFromIis"
$smtp.send("mike@server.com","mike@server.com","test","test")
$smtp

Host :
Port : 25
UseDefaultCredentials : False
Credentials :
Timeout : 100000
ServicePoint : System.Net.ServicePoint
DeliveryMethod : PickupDirectoryFromIis
PickupDirectoryLocation :
EnableSsl : False
ClientCertificates : {}

I consider notifications a major part of scripting of any type. It is not necessarily enough to just log something if you never check the logs. I’d rather throw something to the foreground, whether that’s an actual error or, in the case of a daily notification that a script has run, a quick email to my Inbox. This can even complement logs, such as with a log tail script that emails on certain events (see the sketch below). Among many, many other uses.
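
As a rough illustration of that log tail idea, here is a hypothetical sketch that reuses the fictitious mail.server.com relay and addresses from above; the log path is made up. It sits on a file and emails any new line containing ERROR:

# hypothetical log file and relay; swap in real values
$smtp = new-object Net.Mail.SmtpClient("mail.server.com")
$log = "C:\Logs\webapp.log"

# -Wait keeps Get-Content reading as new lines are appended to the file
Get-Content $log -Wait | ForEach-Object {
    if ($_ -match "ERROR") {
        $smtp.Send("mike@server.com","mike@server.com","log alert",$_)
    }
}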