an example of putting the dns bug into perspective

With all this DNS stuff going around, obviously Dan Kaminsky has found something interesting, and the fix is to use random source ports. Now, that might simply mask the real vulnerability by upping the effort needed to leverage it. Or it might simply close off some other avenue of attack (someone on FD threw out ICMP responses..). I really don’t know, and am looking forward to the outing at Black Hat (I won’t be there, but I’ll be waiting and watching from afar).

Halvar Flake has a blog post that can help put this issue into a bit of perspective, at least to the net geeks. He essentially says we shouldn’t have been trusting DNS anyway, so this isn’t a huge thing to worry about. To the rest of the world, unfortunately, that doesn’t necessarily apply quite as nicely…

1. Halvar will tell us we shouldn’t be trusting DNS anyway. The rest of the world does not understand that and will be asking either why we use it, or why we don’t use a secure implementation of it. Of course, at some point somewhere we have to deal with something we can’t trust if we are to interact…

2. C-levels wouldn’t understand it if this bug became weaponized and used to mass-poison servers, preventing them from trading their stocks (or their company’s stocks!). Untrusted or not, they’re affected and that will slide downhill and become our major headache.

solve dns, get on stage with dan!

Yesterday, Dan Kaminsky posted a long post about his latest DNS find. In it, he gives some incentive to find the bug before his talk!

Now, if you do figure it out, and tell me privately, you’re coming on stage with me at Defcon. So I can at least offer that

And Dan also has a tool on his site to test your DNS server (it appears to go after the DNS server authoritative for your IP address, i.e. the ISP DNS server). When I run the test regularly, it always tells me “All requests came from the following source port: 35353.”
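That fixed source port matters because, once the port is known, the only per-query secret left for an off-path attacker to guess is the 16-bit transaction ID in the DNS header. A minimal sketch of that wire format (standard RFC 1035 layout; the hostname and helper name here are just for illustration):

```python
import random
import struct

def build_dns_query(hostname, txid=None):
    """Build a minimal DNS A-record query in RFC 1035 wire format."""
    if txid is None:
        txid = random.randrange(0x10000)  # the 16-bit transaction ID
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, a terminating zero byte,
    # then QTYPE=A (1) and QCLASS=IN (1)
    labels = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
    question = labels + b"\x00" + struct.pack(">HH", 1, 1)
    return txid, header + question

txid, packet = build_dns_query("example.com")
```

A forged response is only accepted if it echoes that exact ID and lands on the right port; with the port pinned at 35353, an attacker only has 65,536 IDs to cover.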

If I were an ISP, I would do this against my own DNS and then watch the wire to see exactly what tests Dan is probing my server with. 🙂 Some further reading by Joe Stewart on DNS cache poisoning and Dan J. Bernstein on various DNS challenges. And yet more DNS challenges. And another nice paper which includes non-spoofing poisoning attacks (btw, papers with no dates in them fail).

Some questions: Is the bug in the servers themselves, or does the cache get poisoned on the clients because of predictable responses? If it’s on the servers, I wonder if it is more vulnerable to local networks as opposed to external attackers? Just some thoughts…

an industry that tries to sell the idea that old tools don’t work

In some random browsing (and ranting!), I ran across a post by Ron Newby which talked about a recent quote from Matasano. Ron reacted to Matasano’s: “Firewalls are underrated, but only by an industry which is perpetually looking at selling you the next new thing.”

I think the point here is simply a paradigm difference between how people get things done. It might also reflect the fact that the Matasano guys likely use the tools, whereas Ron has been on the sales side (or has that background).

If you have to dig a hole, do you expend some energy and use a shovel, or do you rent a backhoe (with covered canopy, internal air venting, a scoop on the other end, new paint job…)?

On one hand, you spend time and effort to do what is maybe a more surgical job with a tool that will almost certainly not fail you. (And you better have the back for it!)

On the other hand, you save time but may have to wait in line to acquire the equipment, maintain it, operate it, and probably lose some surgical ability with a large scoop and machinery between you and the ground being broken.

And of course both choices still require measuring where the hole should be, dealing with the excised dirt, tracking the progress, making sure the direction is clear, etc.

There is merit to either situation such that I would never dispose of one or the other. However, I will never say, “…they are promoting firewalls, which suck, and will always suck, and should be shot…”

What should I have instead of my firewalls?* Some UTM that runs on clunky Java and tries to do 39 different things, none of which it does exceedingly well, and only 3 of which I need for a consulting gig at a ma and pa shop (because their marketing teams place more emphasis on number of features rather than useful/valuable features)? Sure, there’s the age-old “security religion” issue where one side will denounce firewalls because they can’t stop everything, but that’s, again, a paradigm and a situational difference. Not a universal right or wrong, value or non-value.

I agree with Matasano that it sucks we keep having vendors push new things on us over and over and end up driving a lot of the security we see in the press today (yay marketing and sales cycles!). I mean, they have reasons to innovate for their own economic gain; not necessarily because the security industry has new needs.** And I will say that just because something is new, does not mean it adds value to me beyond tried and true tools from the past.

* Fine, yes firewalls should be better defined before denouncing or defending them. And yes, firewalls that have no context into application layers 1-7 have less utility.

** There are new things, don’t get me wrong. Old tools won’t protect new stuff like virtualization or newer web 2.0 coding languages or practices, for instance.

yup, limewire is still used to disclose data

A report posted by Brian Krebs at the Washington Post (one of the few major publications whose security reporter I actually enjoy reading most of the time!) further illustrates why the assholes in IT and Infosecurity exert control and policy over end user systems.

Sometime late last year, an employee of a McLean investment firm decided to trade some music, or maybe a movie, with like-minded users of the online file-sharing network LimeWire while using a company computer. In doing so, he inadvertently opened the private files of his firm, Wagner Resource Group, to the public.

The breach was not discovered for nearly six months. A reader of the Security Fix blog found the information while searching LimeWire in June.

It would be nice to allow employees full use of the web and their systems if it weren’t so risky, eh?

demand will eliminate net neutrality debate over time

I got pointed (via elamb) to an article discussing net neutrality: five facts everyone should know, hosted by the folks at the 10gigE Alliance. Net neutrality on Wikipedia.

The concept of net neutrality is an interesting one, especially when you look at the economics of it. It makes sense to limit traffic if you’re a carrier with limited bandwidth/resources (or will someday be limited). But it makes sense to have unlimited traffic if you’re a consumer. Business wants to cut costs; consumers want their freedom of choice.

To emphasize points in the article:

2. Net packaging. Yeah, I think we should really never talk about net packaging ever again. AOL tried this approach with their walled-garden business model. It doesn’t work or suffice for most users. Likewise, for every site that wants to charge even small prices for content, there are 3 other sites with nearly the same content for free. Or if it is new and has no peers, it will in a year or two when the business model proves unprofitable or too many alternatives appear. Cable companies still tout packages of channels, but this is slowly going away (as slowly as they can make it).

3. Networks are “protecting” consumers. Fine, this is a great marketing point, but the article is correct: any protection is simply a coincidental by-product. And even then, it can’t be all that secure for everyone. Any protections an ISP will provide will be like swatting flies with a sledgehammer. Even non-ISP services like DNS providers or site advisors or email server blacklists are clumsy and end up swatting legit sites in their wide swings.

4. Speed Throttling. I don’t feel this has as much to do with net neutrality because it is more a function of speed as opposed to open access or blocked traffic. It’s also something I won’t get into much. I’ll pay what I have to for satisfactory service and move on. Then again, I am weird and couldn’t tell you the price of gas on any given day or week (I don’t check prices, I just fill when I need to, pay it, move on; it’s not important enough to care about until it impacts me such that my habits have to change and I drive less…). As long as I can pay the bills and do what I do on the net, I don’t much care.

Thankfully, despite all the passionate talk about net neutrality, this is a geek’s realm: internet access. There will always be alternative providers that understand geeks and offer good bandwidth without restrictions or delusions of making money off weird implementations (like Mediacom, my cable provider, which hijacks every bad DNS lookup I make in a browser). This is still an economic consumer system that is ultimately going to be ruled by demand, not supply. Sure, there are plenty of consumers who will just do whatever, but they’ll just end up in the next AOL walled-garden of disappointment.

dns server patches coming out

Over on DarkReading I just read up on a finding by Dan Kaminsky that is resulting in a rather huge rollout of DNS server patches from a crazy number of vendors. Seems like someone either hit on a critical issue or, as Ptacek is quoted in the article, an exploit has been developed.

It sounds logical that the issue is related to old issues with spoofing query responses fast enough (and when leveraging recent well-known PRNG issues) and today’s ability to send lots of packets really fast. Bombard a server with specific DNS queries while at the same time spoofing a bombardment of responses to the server that look like they are from an authoritative server, and you might just hit upon a good combination which can poison the DNS cache of that server for a short time. Anyone else making the same DNS request from a poisoned server will be given the bad IP address and get sent to the bad server.
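To put rough numbers on that race (back-of-envelope only; the forged-packet count below is an assumed illustrative figure, not a measured one):

```python
# How much harder does source-port randomization make the race?
TXID_SPACE = 2 ** 16              # 16-bit DNS transaction ID
EPHEMERAL_PORTS = 64512           # ports 1024-65535 (assumption)

fixed_port_space = TXID_SPACE     # attacker already knows the port
random_port_space = TXID_SPACE * EPHEMERAL_PORTS

# Assume the attacker lands this many forged responses in the window
# before the real authoritative answer arrives (made-up figure).
forged_per_window = 200

p_fixed = forged_per_window / fixed_port_space
p_random = forged_per_window / random_port_space

print(f"fixed port:  ~{p_fixed:.3%} chance per query")
print(f"random port: ~{p_random:.7%} chance per query")
print(f"effort multiplier: {random_port_space // fixed_port_space}x")
```

The attacker can retry by triggering fresh queries, so even small per-query odds add up over time; randomizing the port doesn’t fix the protocol, it just multiplies the required effort by the size of the port space.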

Being able to actually weaponize this would be pretty valuable as users would really never know they were on a bad site unless their browser queries several DNS servers to compare the results or the bad server IP is blacklisted somehow. Calling in to tech support when the site doesn’t work (for instance when the login isn’t accepted) will result in a lot of testing before even possibly hitting upon the problem. Then again, attackers can just make a fake front page and pass the users on to the real site after farming out the login info. Until the accounts are hijacked, no one is going to be the wiser.
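The multi-resolver comparison idea above could look something like this sketch (the `resolve_with` helper, the dnspython dependency, and the placeholder resolver IPs are all my assumptions; note the check is for overlap, not equality, since round-robin DNS and CDNs legitimately return different sets):

```python
def resolve_with(nameserver_ip, hostname):
    """A records for hostname as answered by one specific resolver.
    Uses the third-party dnspython package (an assumption; any stub
    resolver that lets you pick the server would do)."""
    import dns.resolver
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver_ip]
    return {rr.address for rr in r.resolve(hostname, "A")}

def answers_disagree(set_a, set_b):
    """Heuristic: True when two resolvers share no IP at all.
    Overlap, not equality, is the sane check, because load-balanced
    sites legitimately hand out different subsets of addresses."""
    return not (set_a & set_b)

# e.g. answers_disagree(resolve_with("203.0.113.53", "example.com"),
#                       resolve_with("198.51.100.53", "example.com"))
```

Even this is only a heuristic: a poisoned resolver that agrees on one IP with a clean one would slip through, and a disagreement is more often a CDN than an attack.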

Follow the links in the article for more information on the older issues. More info on PRNG vulnerabilities can be found from your local Google site.

devil’s advocate thursday!

Richard Bejtlich has posted recently a comparison of current information security practices to the times of Galileo. Rather than listen to the same old rhetoric and belief, Galileo centered his claims on empirical (measured) evidence. This sounds similar to the concept of “management by fact” (which Bejtlich has posted on previously as well). I think there is a lot of merit in measuring what we do in infosec and then managing by fact.

I do, however, have one minor criticism of this approach, while not actually disagreeing at all with Bejtlich.

Galileo used measurements to shatter beliefs, but many things that seem like beliefs in infosec may well have been at one time or still are based on measurements (the validity of the measurements may be suspect, however!).

Would it be management by belief if 50 companies reported measured success with a password policy, and I simply accepted that conclusion and implemented it? Or if patching within 30 days didn’t help in 500 incidents, so why bother? Holding too firmly to the Galileo example (management by fact) may end up insinuating that unless you personally have made the measurements, then everything else is belief. But not everyone has a big telescope.

This might be a discussion on the validity of statistics versus facts versus belief versus best practices versus risk…

Galileo benefitted from two things that we do not have. A) Nothing he nor anyone else did would change the a priori truth that the Earth revolved around the Sun. People just had to measure it correctly. B) No one had provided the proper measurements before. At all. We don’t have the assurances of A in infosec, nor are we forging absolutely new ground like B.

Now, while I offer up the above, I don’t say that companies should get away with not measuring their own implementations, not at all! I just don’t want to too stubbornly go down a road that leads to an egocentric security stance that may or may not be right.

Maybe because of A this is a discussion that needs to branch into two directions and not mix the two: macroscopic infosec and microscopic infosec. Macroscopic infosec would deal with large entities and their interactions (ISPs, global security, standards, compliance, or universal practices that everyone should pay attention to). Microscopic infosec may be dealing with what one company implements within its virtual walls, how it measures it, and manages by fact.

think of all the things that could have a kill switch

Bruce Schneier has a new Security Matters article up on Wired. He talks about the growing trend of “kill switches” on various electronic devices.
Definitely not a good idea, but I think at least in the consumer markets, economic forces will keep such products from getting too out of control. For instance, I am still in the casual market for a new portable digital music player, and I won’t be getting an iPod. Basically I don’t trust Apple in conjunction with iTunes and my digital media (not all of which is legal). I want to manage my device out of band, and really never have to worry about DRM or the firmware suddenly making decisions for me.

Bruce is correct in worrying about the chains of authority when you start giving one device power over another. The wider the chain, the more dangerous it gets. Maybe Windows should have a kill switch that is remotely accessible? We can bring back teardrop/Ping of Death!