September 2008 Archives
The first rule of using RFID is: don't talk about RFID issues! At least, that may be the gist of this story I read from Dan Morrill on how the Mythbusters are prohibited from airing an episode on the insecurity of RFID chips. This was over on Engadget as well as other places, just to throw some links around. If you have to suppress information about insecurity, you have problems to fix!
You know, if a credit card company could implement RFID properly and securely, and openly have it vetted, tested, and beaten on, they might find some value in that. Knock-offs and theft aside, everyone should strive to be secure enough that full disclosure would not break the entire product/system down.
by michael 09.02.08 at 1:02 PM in /general
No surprises here: Google Chrome is out (beta). Their terms of service are sketchy (albeit a generic TOS). I used to love Google back when Yahoo went public and I no longer trusted Yahoo or found their site as useful. Now Google is public, and I just can't trust that "Do no evil" will ever again trump "Make more profit." I'll likely try Google Chrome at some point, but I expect Google to harvest all the data they can from its users. And thus, at this point, I just don't trust it.
(Hell, it already annoys me that Firefox 3 makes constant checks to Google's safesearch by default...)
By the way, does this mentality of distrust automatically make me more old school in IT security? :) There are a lot of wishy-washy, business-kool-aid-drinking people around these days... Distrust, full disclosure, researching on personal time... these things still seem like somewhat necessary traits for a healthy security culture?
by michael 09.04.08 at 6:20 PM in /general
I posted about Mythbusters vs RFID a few days ago. In the interest of equal representation of stories, I wanted to post this one I saw, which suggests the Mythbusters chose on their own not to pursue an RFID security episode, rather than, as reported, capitulating to lawyer demands.
MythBusters co-host Adam Savage is stepping back from public comments suggesting that legal counsel from several credit card companies led the Discovery Channel to pull the plug on an episode dedicated to security holes in RFID.
Where does the truth really lie? Who knows. Savage may have just come to his own erroneous conclusions or he might have been pressured to clear the air. I doubt we'll ever really know when it comes to media and media relations and that whole public song-and-dance.
by michael 09.05.08 at 11:15 AM in /general
I don't read Schneier's blogs. Why? Because everything cool he says will get linked or sent over by other people I read. So it was with Schneier's latest essay on security ROI. An excellent article, although it echoes what others in the industry (including myself) have really kinda known for a few years now. But he concisely brings up the issues we have when trying to value threats, risks, and countermeasures in formulating ROI.
Before I get into the details, there's one point I have to make. "ROI" as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.
In the end, this is just all so much guesswork and the only things you can count on are using such measures as a general guideline and trying to be as consistent as possible when measuring and using them.
As usual for Bruce's blog, the comments are many and fairly well-informed. Skimming through them reveals just how difficult the idea of security ROI or security cost really is, and possibly how non-universal every "answer" is.
So, we harp about FUD, but isn't that what you have to do in the face of a lack of ROI? Is that how insurance sells itself, whether spoken or just subtly implied?
by michael 09.05.08 at 1:01 PM in /general
I got an urge to install the MultiBoot-ISO I recently posted about. I picked up a cheap 8GB USB stick from Newegg. In order to install the .iso file to USB, I needed UNetbootin. I really like instructions, even if the steps turn out to be simple, so I'll detail my adventure below.
I'm using Ubuntu 7.10. Download UNetbootin to the Desktop. (The MultiBoot ISO itself I grabbed via the torrent link, which took almost a week to download at 4 GB in size; I used Azureus, which appears to now be named Vuze.) There are two dependencies that need to be installed before running UNetbootin: mtools and p7zip.
sudo apt-get install mtools p7zip p7zip-full
chmod +x unetbootin-linux-275
sudo ./unetbootin-linux-275
This starts up the rather simple GUI. Select the radio button for a Diskimage ISO install. Navigate to and select the MultiISO-1.0.iso in the GUI. Down lower, make sure USB is selected.
Insert the USB stick and let Ubuntu mount it. I need to find the device node, and Nautilus isn't immediately helpful, as it only tells me the mount point /media/PATRIOT. Thankfully mount will give me what I need.
/dev/sdb1 on /media/PATRIOT ...
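If you want to script that lookup, a tiny helper can pull the device node out of mount's output. This is just a convenience sketch; the helper name is made up, and the PATRIOT label is from my stick, so substitute your own mount point.

```shell
# usb_dev MOUNTPOINT: print the device node backing a mount point,
# parsed from `mount` output (e.g. "/dev/sdb1 on /media/PATRIOT ...").
usb_dev() {
  mount | awk -v mp="$1" '$3 == mp { print $1 }'
}

usb_dev /media/PATRIOT   # prints the backing device, /dev/sdb1 in my case
```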
Back in the UNetbootin GUI, select USB Drive as the type and /dev/sdb1 from the Drive dropdown. Reverify everything to avoid accidentally installing to the local disk! Click OK to start. For such a long download and large file, the install takes maybe 4-5 minutes, which is mercifully nice.
by michael 09.06.08 at 11:09 PM in /general
Renderman has linked to video of his Defcon talks, both of which I ended up missing despite really wanting to go. I didn't even know about his room-size problems until just now, so I suppose I saved myself some frustration in missing that talk. Rather than download and open a local copy of his slides, Kaminsky has embedded them in a post of his own.
by michael 09.08.08 at 10:14 AM in /general
There are more live cds around than you can shake an octopus at. And more than I could ever really evaluate effectively. So here are two that I have heard about but have not had a chance to try.
appears to have been released around October of 2007.
is another live cd, although I have no idea when this was released.
It appears both distros are looking to fill gaps in hardware support to perform wireless packet injection.
by michael 09.10.08 at 3:17 PM in /general
I just came across a lengthy post on webmail password "recovery" techniques by Gunter Ollmann from IBM's ISS.
For a while I've been pretty blasé about CAPTCHA hacking developments. I mean, why do I care if a CAPTCHA can be beaten?
Well, that's because password "recovery" agents can employ distributed botnets to bruteforce web apps. And the typical response to too many invalid attempts in a certain amount of time is to throw up CAPTCHAs. Well, if those are being broken, what's next?
Sure, you can do a lockout period or flag it somehow, but you're either going to introduce DoS scenarios for legit users or start some futile investigation into what you already know: botnet bruteforce, good luck! So, yeah, it drives home the significance of CAPTCHA breaking.
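To make that tradeoff concrete, here's a toy sketch of the naive per-source-IP failure counter. The log format and threshold are made up for illustration; the point is that a distributed botnet spreads its guesses across thousands of source IPs, so each one stays under any sane threshold, while a busy NAT gateway full of legit users trips it.

```shell
# failed_per_ip LOGFILE THRESHOLD: print source IPs with at least
# THRESHOLD failed logins. Log format (made up): "<ip> FAILED <user>".
failed_per_ip() {
  awk -v t="$2" '$2 == "FAILED" { n[$1]++ }
                 END { for (ip in n) if (n[ip] >= t) print ip }' "$1"
}
```

A botnet making one attempt per node never shows up in that output; your own NATed office probably does.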
Towards the end, Gunter talks about what you can do to prevent this or minimize the damage. I'll include those steps and some more here in my own list.
- Use a strong password.
- Probably change it regularly, especially if you have no way of knowing if someone is currently accessing your account or not.
- Delete your email archive/history. Do you really need all that garbage?
- Don't keep passwords and "Welcome to site blah" emails in your mailbox. Keep them where you keep your passwords (PasswordSafe?) or bookmarks (uhh, bookmarks?). This way your archive doesn't give juicy leads on where to trigger password reminders from.
- Keep a few fake emails in your inbox with subjects like, "To reset your password go here," or "here is your SSH connection info." In these emails, provide an IP address of a server you control or can get the logs from, and see if someone other than you ever attempts to connect. Or maybe an email or two with embedded images from a remote web server that you control. If someone connects, alarm klaxons can sound.
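For the embedded-image variant of that last trick, detection can be as simple as watching your web server log for the image ever being fetched. A minimal sketch, assuming an Apache-style access log; "canary.gif" is a made-up name, so use whatever you actually embedded in the decoy email.

```shell
# check_canary LOGFILE: alert if the decoy image embedded in the fake
# email was ever fetched -- only someone reading the mailbox would load it.
check_canary() {
  if grep -q "GET /canary.gif" "$1"; then
    echo "ALERT: canary image fetched -- someone read the decoy email"
  fi
}
```

Run something like that from cron and mail yourself the alert, and the klaxons can sound on their own.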
by michael 09.11.08 at 4:00 PM in /general
I'm not going to dwell on this much, but there are reasons why I don't and never will use iTunes. This latest rash of Windows BSODs, possibly caused by an iTunes silent install, is more fuel on various fires.
Of note, I'm not surprised by this. I expect the three-way war between Microsoft, Apple, and Google to get ugly, especially if causing issues on, say, Windows, helps position your own OS or alternative delivery systems...
To bring this around to security, I'd be surprised if anyone with a security/hacker ethic were fanatically entrenched in any of those three camps. :)
by michael 09.11.08 at 6:20 PM in /general
Grossman and RSnake have gone public with wind of a new, critical enough vulnerability in, well, the web I guess, for lack of further details. Rather than disclose their findings either online or in an upcoming talk, they have opted on their own to pull their information until the major browser vendors can get their stuff patched. Kudos to that!
In the meantime, we get to know something evil is afoot, but we have to just worry and find religion until the vendors roll out patches and we get them rolled out in turn. RSnake mentioned both browser vendors and web site operators...ugh...sounds big and complicated and I certainly wish I knew what to look for or how to even workaround it or spot any symptoms in the meantime.
I 100% understand the reasons for performing responsible disclosure, and I don't necessarily directly disagree with them. I don't find it useful to disagree with someone's opinions, as a general rule in life. Ten years ago disclosure was about throwing the information out there. In recent years, it is very fashionable (and even defendable) to follow "responsible disclosure" practices.
But I predict "responsible" is going to change in the next few years. Right now, "responsible" is defined by the vendors in an effort to protect their business and products (and scare researchers with lawsuits). In several years from now, I see the pendulum swinging back over to "responsible" being defined as how the public is affected, especially the security people representing and protecting the public. Of course, that opens the debate of whether it is more secure to disclose bugs (thus giving bad guys the information) or withhold them (not allow anyone to know how to protect themselves).
In the end, this keeps the general user at the mercy not just of the bad guys, but also a handful of security researchers and the vendors. But maybe we overestimate the bad guys? Maybe we have no idea what the real impact of disclosure is? If they disclosed, would the Internet buckle as thousands of people are collectively owned in 12 hours, 2 days, 1 month? Would it kill Browser X in the browser wars? What if Kaminsky had disclosed earlier this year and gave the vendors little to no lead time?
I certainly have no answers, but I do tend to fall on the side of full disclosure more often than not, usually because it is the side that tends to have less dependency on assumptions and unknowns that may never be quantified. I stress tend because there are exceptions, and I don't actually fault someone who has chosen the other side.
As a final parting shot, we can take both practices to an extreme. On one side, we have lots of hidden knowledge that few people know and understand. On the other, we have collectively more knowledge and ability to protect ourselves or improve the bottom line in the long term, perhaps at the expense of the 0day period. One promotes the sharing of the general concept of Truth, and the other stifles it. It is my opinion that truth, like information, tends toward freedom. It is people (especially those with something to lose, like power) who fight against that tendency.
by michael 09.18.08 at 8:31 AM in /general
I usually hit a local bookstore over lunch, and the other week found a book that made its way to the Computer Security section: Arrest-Proof Yourself: An Ex-Cop Reveals How Easy It Is for Anyone to Get Arrested... [long ass title] by Dale C. Carson. I picked it up and started to check it out, and am quite glad I did. At a cover price of $15, I consider it a borderline steal, even though I just pick it up and read it at the store and am not buying it. It is amazingly accessible and fun to read.
The book has a lot of content for people who do recreational drugs, so much of that doesn't apply to me. But it has many things that are useful to know, especially to someone in the security space.
For instance, and this is in the title, but I hadn't realized how easy it could be to get arrested. And once you get arrested, that is on your digital record forever, even if you did nothing wrong and never went to court. It still will haunt you. A single arrest on my record for being stupid could affect my career.
Police may not have the proverbial quotas of traffic tickets to give monthly, but they truly are rated on scorecards based on arrests made and other items. And not by the quality of their arrests or how often they were helpful or respectful to citizens in need.
by michael 09.18.08 at 1:03 PM in /general
Too often there comes a time when a favorite program or website "sells out" and becomes bloated with features that are either not necessary or add to user distrust (why do they want to store my browser history so long...?). I've long been a big fan of Firefox. But Firefox finally hit that threshold with me at version 3, with their damned nosy and unwieldy location/address/URL bar. I think they actually do call it the "Awesome Bar," which to me smacks of development/management going too far just for the sake of developing new stuff, awesome or not. I don't want rich results, or my bookmarks displayed, or some fancy frecency rating. Give me a location bar that holds a small history in the dropdown of things I typed or visited, that only goes back as far as when I last closed down Firefox, and autocomplete only based on that list. That's not hard!
I shouldn't have to find a browser add-on to give me my desired previous location bar behavior, but in this case I had to. The add-on Old Location Bar 1.3 gives me exactly what I wanted. It is experimental, so you have to log into AMO to get it, but if you don't want to bother, there is a link embedded in the long description if you scroll down a screen or two.
by michael 09.20.08 at 10:19 AM in /general
I wanted to point over to an article on compliance checklists and security by Bill Sieglein.
Over that 15 year period my attitude about using checklists to ensure the existence of security controls has shifted as well. Early on we were begging for some standards and checklists to compare against. Later on we realized that using checklists can lead to a sort of 'tunnel vision'. Now that the list of regulatory requirements that most organizations have to comply with is growing unmanageable, I am seeing folks lean back on checklists again just to ensure completeness.
To me, checklists occur for several reasons (Sieglein actually mentions #1 and #3 in the article).
- We don't yet know enough to make our own decisions (or our own checklists!).
- Stakeholders often tend to live in ignorance of insecurity and feel good that the front door is locked, even though a window in back behind a bush can be easily jimmied (home security is a great analogy to how many business stakeholders treat security). I'll be curious how this works out with the other companies hacked by the TJX hackers that didn't know it until the feds informed them...
- There is too much information to digest, so we try to condense it to a checklist.
And there is one other issue with checklists that Sieglein somewhat touched on but I wanted to flesh out.
Checklists are basically a binary measure: checked or unchecked. Unfortunately, security is not always a binary practice. Ask any security dood "Is it done?/Is it secure?" and the answer will always be either "No" or heavily qualified. Overcoming this by sticking to checklists means making huge ones, or filling them out extremely frequently.
I'm not against checklists. They have a necessary place in our security environments. We use them to ensure consistency in our work and our expectations/requirements. We use them to give someone else a quick glance into our world. We use them as easier-to-measure data points over time. We use them to organize our own sea of duties and information... But like good beer, they can be enjoyed, yet they also need to be tempered a bit to avoid falling down into a bad hole (hole, tunnel vision, same lack of vision).
by michael 09.24.08 at 10:56 AM in /general
It is my growing belief that IT is assumed to be an idle entity until a call/issue/walk-up comes in. At which point we spring into a swarm of action like the Tasmanian Devil.
The reality, of course, is that we're always doing something or have a list of projects and tasks to take care of, let alone all the "wish I could do" things.
IT is like a check-out line that already has people in line waiting. But we're often expected to be more like an empty check-out line. (And, of course, it's our fault when people are in our lines!)
by michael 09.24.08 at 11:41 AM in /general
Dan Morrill posted about the Hacker Profiling Project, and I thought I'd wax on about it a bit. (Can you profile which blogs I read when I'm swamped and have no time?)
My initial reaction is to scoff at the idea of profiling a group of people who tend to be very independent and free-thinking. I groan at the horrible use of the term "hacker." And I laugh at some of the questions on the site (ugh) which could fit almost every teenager I've ever known and beyond. Basically, I already don't find this useful. At all.
This is an amateur opinion. I've done no research and really don't care to, but these are just my Monday-morning-quarterback thoughts. I've seen CSI, X-Files, Silence of the Lambs, Hackers, and Mindhunters, so I feel I can comment on this topic. Oh, and I can browse Wikipedia too. Ignore my abuse of the term "hackers;" they started it.
I'll buy that one can profile hackers to a degree, but I would suspect that such profiling is ultimately measurably (if not greatly) less accurate than profiling other criminals such as murderers, sociopaths, arsonists, and abusers. I'd suspect that profiling either nets far more people than are criminal hackers, or catches the same obvious ones other profiling methods catch (bed-wetters, torture small animals, history of abuse, etc).
I'm a prime candidate for never being profiled. If I were to decide to become a serial killer of some sort or other, no one would be able to actually profile me terribly accurately. I'd just be your average suburban white Joe who, in a vacuum, decided to commit fairly random acts of violence despite my unremarkably ordinary upbringing. You can't predict or track that. It helps that I don't associate with any negative influencers, which might artificially hasten my crossing over the line of a first offense.
Much of the profiling seems to occur based on one's propensity to cross that moral line between good and evil. Either one has a warped sense of that line (childhood nurturing), or one has already crossed it (repeat offender who escalates).
I've posted about it in the past, but I like to refer to an old study, which I can never find links to, that put a group of strangers into a room to hang out. Their social behavior was then studied. A separate group was also put into a room to hang out, but in this case with the lights turned off. It is almost obvious to guess which group got into a little bit more mischievous trouble with touching and exploration.
Typically, hacking is far removed from actually physically harming a living thing. It is more than one level removed. It is easier to profile someone who takes sexual pleasure in death or killing than it is to profile a spree killer, which is easier to profile than someone who beats someone else to death by accident, which is easier to profile than someone who engages in regular fighting, which is easier to profile than someone who drives a Tahoe which guzzles gas and leads to famine and death in Africa because of messed up economics. Ok, so I jumped a bit there...
By the way, is it coincidence that profiling seems to be a very individual affair? You don't profile groups of people doing bad things, you profile individuals with psychological habits...
Likewise, it is obvious when someone is harmed physically, but not so obvious when one's actions negatively influence the well-being of someone else. Think of the difference between spitting in one's burger in the kitchen versus spitting in it right in front of the customer.
So, we have a propensity towards more mischievous behavior with actual or perceived anonymity. So we have a blurred moral line that is not quite so obvious as having crossed the line with one's first murder or theft. And we have a distance between the crime and the perpetrator which deadens the psychological gain or satiation.
I think all of this makes profiling hackers to any degree unpredictable.
All said, I think there are situations where "hackers" can be profiled, but I don't think there will be any sort of degree of accuracy involved in such a profile. Certainly not enough to waste my time with. (It is possible my hacker score just went up by ending a sentence with a preposition; an obvious thumbing to the authority of good grammar!)
by michael 09.29.08 at 4:02 PM in /general
Sometimes I like to make lengthy Dre-like comments in other people's blogs. It sucks to lose those over time, so I'm trying to do more re-posting of them here. This post is one of those.
Rich Mogull posted about turning off security controls for parties that scan your stuff, i.e. PCI requirements. Rothman also picked this up in talking about what vulnerabilities he should care about. Please read their posts and the comments on them, and join in the discussion on their blogs. :)
This was my response to Mogull:
Oh, how to respond to anything...post on my blog, or make long posts here? I'll do both! Hopefully I can stay under the length of Dre's comments. ;) Wait, did I see "masturbation" in there somewhere when I skimmed through the first time? O_o
Oh, and read to the bottom where I bring SCADA into this. ;)
There are a few points I want to address:
1) Turning off things like IPS for vendor scanning.
2) The futility of things like IPS/WAF.
3) When is a vulnerability something I care about?
1) I agree with turning things like your security controls off for scans. First of all, I'd want to know what is underneath those controls. Hell, I'd like to do a scan with them off and another with them on, so I can fill in those comment boxes for countermeasures implemented! But really, I have little qualm about making exceptions for scans if that gives me some valuable information. The caveat would be that those exceptions are documented, surgical, and time-limited.
Let's say you're a security professional. Someone asks you to evaluate their system. You want as much visibility as possible to make a proper assessment. The same holds true for doctors, lawyers, physical security agents, baseball coaches. They all need deep access to maybe even your darkest secrets, otherwise their job is impeded. And I do find value in giving experts those deep secrets.
I would disagree that an external scan is really all about what an attacker sees, especially since a) I don't give a shit about who scans me or how often (ok, there is some value there, but not enough to interrupt my gaming sessions) and b) I can't predict what an attacker wants to see. Sure, I want to know how limited a view an attacker can get of my systems, but does that actually guarantee anything? It just guarantees I'll waste my time and/or miss something on the periphery.
2) I agree with the above sentiments about IPS/WAFs, etc. They mean well, and when someone is dedicated to making them work and babysitting them, I think they have value. But let's face it, people don't babysit them. I am in charge of my company's IPS devices, but god knows I only look at the logs once in a blue moon. It pains me, but...such is the problem with not being dedicated solely to security. So, is that really giving me added value? Not really. In fact, most of the value I afford it is with the logging and detecting, not the preventing.
Dre and others are correct. We have far more important and "easier" things to worry about than deeply inspecting our DMZ traffic. I wish we could worry about that stuff, but there are far bigger issues leading to compromises and bad press. (Then again, this is a natural extension of the resistance people have to us fixing their bigger issues, so we fall back into what we do control without violent pushback... the network and traffic.)
3) (Hopefully Rothman approves my comment on his post today on this topic!) The bottom line is that I care about vulns that are underneath my security controls. I want to know that my controls are not just wasted, and I want to know when I have some soft internal parts that need to be specifically protected. I also want to know them so that I can make proper remediation decisions and evaluate hypotheticals properly. If I have server B that is internal but has a vulnerability, I want to know that in case someone in control of server G can laterally attack it once inside my network. Sure, it might be game-over already, but ultimately at some point I have to answer the question of, "How far did attacker G get, or where could he have gotten?"
I don't want to be the one to stand in front of my boss and explain that I didn't know about vuln X in server B just because I made what is now a bad assumption about the risk of server B.
I think SCADA can be a poster-child to this idea. :)
by michael 09.30.08 at 8:08 AM in /general
I know I sometimes miss these, so I like to point them out for others. (In)secure 18 (October) (pdf) has been released.
One minor complaint I have: seeing too many articles written by executives of companies that stand to benefit from the article. Ugh. I know they stand in a unique position to be more expert than I, but I'd rather hear about such things as the need for single sign-on from someone who is not vested in an SSO company.
by michael 09.30.08 at 9:47 AM in /general
I often point to links I find interesting or meaningful. Dan Kaminsky recently posted an excellent essay (since I get sick of using "blog" and "post" to describe these) discussing a great deal of things about our current digital environment. I highly suggest giving it a thorough read. This is one of those sit-back-and-reflect sorts of essays: figure out where you stand and how to possibly move forward in an informed manner.
IT departments are always in a bind. They’re responsible for anything that goes wrong on the network, but every restriction, every alteration they make in people’s day to day business, carries with it a risk that users will abandon the corporate network entirely, going “off-grid” in search of a more open and more useful operating environment. You might scoff, and think people would get fired for this stuff, but you know what people really get fired for? Missing their numbers.
by michael 09.30.08 at 10:44 AM in /general
Here's my situation today. Our web development team has opted to purchase a third-party product to fill a need rather than build their own. This is a hosted web product that goes on my servers. It was not made with "enterprise" in mind, if you ask me. It can't be load-balanced, has hokey internal management (the devs can essentially restart ASP at will), doesn't neatly plug into our existing hosted apps, and has never been evaluated for security.
Today I got the request to make the test site/server I stood up for them to be available externally.
For a client presentation next week.
by michael 09.30.08 at 10:52 AM in /general