pci 2.0: scan your whole network for cardholder data

If anyone has any suggestions on this topic, please comment or tweet or email me!

On page 10 of the PCI DSS v2.0 document, before the actual requirements, there is a section on determining the scope of an assessment, which includes these lines:

The first step of a PCI DSS assessment is to accurately determine the scope of the review. At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope. To confirm the accuracy and appropriateness of PCI DSS scope, perform the following:

  • The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined cardholder data environment (CDE)

The key word in that whole part is that pesky “should.” As it stands, that one word makes this an unnumbered unrequirement. In my case, my particular QSA has opted to make this a requirement of the scope, i.e. I need to scan my entire network for stray bits of cardholder data.

Let me say I completely agree with this need. There is everything to gain from a scan like this. Not only is it necessary, but having the ability to perform a scan like this means being able to leverage it for other purposes, like finding client-specific data, porn (conditionally), or anything else hiding in places it shouldn’t.

But this isn’t a small deal (Windows servers, Linux servers, file servers, encoded files, databases, workstations, email servers…), and I don’t know of any tools that actually do all of this short of buying into a DLP product whose first phase of implementation probably involves exactly this task: scan everything to see what needs protecting. That’s a heavy pill (full DLP licensing cost) to swallow for just one task (the initial scan). I’m actually quite amazed that DLP providers aren’t yet offering this as a standalone service/product.

I have stuck my fingers into a few tools, and so far none are satisfactory. Disclaimer: I have only done *extremely* limited testing, and have not even begun to tackle the database aspect.

PANBuster recently hit the blog posts, though everyone regurgitates the same old intro blurbs without any real details. PANBuster is a small non-installed exe file that you can run on the command line of a system, and it will scan a target file or path for PAN data. The scan is quicker and more lightweight than other options. But the results haven’t been all that exciting, as I find more hits with other tools (both false and potential positives). The biggest drawback, however, is the lack of any UNC or network path support. Extreme bummer. Scripting it would probably mean interrogating servers for all physical drives and remotely executing the exe on each. Really messy.
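For a sense of what these scanners do under the hood, here’s a minimal sketch of my own (not PANBuster’s actual logic): match loose PAN-shaped digit runs, then apply the Luhn checksum to cut down on false positives.

```python
import re

# Deliberately loose candidate pattern: 13-16 digits, optionally separated
# by spaces or dashes. Real scanners also apply per-brand prefixes/lengths.
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text: str):
    """Yield Luhn-valid PAN candidates found in text."""
    for m in PAN_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            yield digits
```

The Luhn check alone won’t save you; plenty of non-PAN numbers pass it, so any hits still need human review.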

Spider from Cornell (currently Spider4, aka Spider2008) is a tool that can be installed and run from a local GUI, but it can also be command line-driven. Executing a scan via the command line is a bit tricky, but it certainly can be done. A subsequent unattended scan will not succeed unless you do some magic (ok, you delete the locally saved scan state file) each time. Configuration can be governed by an XML file, but the values are arcane at best (wtf does option 1048 mean?) and not documented. The fat GUI app actually launches even when driven from the command line, and then exits out. Any strangeness and it’ll sit there waiting for an operator to click an “Ok” button.

On the plus side, Spider *can* technically be scripted, and I already have a plan of action to do so with PowerShell. It will save hits to a discrete log (the file names and paths, but not the actual hit data; that can be saved in an encrypted local database). It can also scan UNC paths, including admin shares with the proper permissions. That alone is a huge plus.

On the negative side, scans are long, they can return tons of hits, there is no scan result management at all, and the whole thing really doesn’t make me feel very warm. I’d expect a month of execution to scan my network, and I’d have to constantly check it to make sure it’s not hung on something.
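If I end up scripting something myself, the UNC angle is actually the easy part. A rough sketch, with hypothetical host names and my own arbitrary 1 MB read cap, of walking admin shares and logging only file paths (mirroring Spider’s sensible habit of not writing hit data in the clear):

```python
import os
import re

# Loose candidate-PAN pattern over raw bytes; a real scanner would also
# Luhn-check matches before logging anything.
PAN_RE = re.compile(rb"\b(?:\d[ -]?){13,16}\b")

def scan_tree(root, log_path="hits.log"):
    """Walk a directory tree and log paths of files containing PAN-like data.

    Only file paths are logged, never the matched data itself.
    """
    with open(log_path, "a") as log:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                try:
                    with open(full, "rb") as fh:
                        data = fh.read(1024 * 1024)  # read at most 1 MB per file
                except OSError:
                    continue  # locked/unreadable files are common; skip quietly
                if PAN_RE.search(data):
                    log.write(full + "\n")

def scan_host(host, share="c$", **kw):
    """Point scan_tree at a host's admin share (needs admin rights there)."""
    return scan_tree(rf"\\{host}\{share}", **kw)
```

A driver script would just loop `scan_host` over the server inventory; the hard part is everything this sketch ignores (databases, encoded files, mailbox stores).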

SENF is a tool from UTexas that I’ve not tried out yet. Like Spider, it is made for educational institutions, where the institution holds system users responsible for the data on their machines, and thus provides the tools plus instructions so users can scan their own systems and send in reports. SENF is written in Java, which doesn’t excite me, and none of the literature appears to support UNC or network-bound scanning of any type. Examples of use are few and far between, and the tool does not come with predefined regular expressions…

CardRecon and IdentityFinder are commercial tools, but they fill the same need as the above options: scanning a discrete single machine and/or local drives. I’m not about to install an agent or tool on 500+ workstations and 200+ servers if I don’t have to.

DLP solutions pretty much universally tout their first phase of deployment to be automated discovery of sensitive information that then needs protection. I’ve not seen more than limited demos of DLP solutions, so I can’t comment on them, but the capital outlay for something to fill this need is annoying. Still, I’m close to actually going through the motions to get some ideas on how they solve this issue.

Forensics tools like EnCase can also help in this regard, but are expensive and also not specifically tailored for network scanning; again they’re a bit more suited to discrete system scanning.

Questions to peers have yielded zero actionable answers. The end result so far is my own conclusion that no one is actually scanning their whole network to validate their expected scope, and this need has been unfulfilled.

happiness in slavery…I mean, security

I’ve been quietly musing over Alan Shimel’s recent post about optimism in security (btw, *love* me some Louis CK!). Then I saw Securosis mention it, and I thought I’d echo some thoughts out.

I could rant a lot about this and make a long post, but not only would I add nothing new, I’m sure I’ve said it all here before anyway. And I agree with both Rothman and Shimel above, for the most part.

What I will say, however, is that optimism/pessimism is a relative thing, and it depends on how you define your happiness. Which in turn depends on how you view your current position in relation to your goals. I think way too often security folks don’t think about their happiness and goals consciously enough. They just want perfect security and solutions and get upset (deeply) when it doesn’t happen, or can’t happen. It’s fine to hit that wall and be frustrated, but you have to accept that that is our reality and not let it define your underlying happiness. Strive for more, but be happy with where you are. There are endless cliches on this sentiment, such as stopping to smell the roses, or life’s a journey, etc.

I for one have no problem going to a conference and bitching, sharing war stories, drinking frustrations away, and being generally pessimistic. I’d rather do that than pretend everything is shiny and happy while we sit back and pat our own backs. Celebration is fine, but the first approach will more probably result in steps forward, while the second really isn’t going to result in progress. I know that might be conflating Shimel’s point about celebrating our victories and being enthused about how far we’ve come in such a short period of technological change.

My own philosophy on happiness (which is sort of influenced by Randian objectivism, though maybe not too obviously from this simplification): Either you’re happy or you’re not. If you’re not happy, change things to attain that happy state. If you’re unable or unwilling to make those changes, then you *must* change your viewpoint such that you become happy. Take for instance a minivan driver. He wants to drive his minivan like a sports car, but it’s just not built for that, so he’s not happy. He has two options: buy a car that suits his wants, or change his viewpoint to become happy with the minivan, i.e. stop driving like it’s something that it’s not, and enjoy it for what it is and the things it does well. The worst outcome is to do nothing and remain unhappy. More people in security (and in general everywhere) really need to put more conscious thought into their fundamental happiness, which goes deeper than point-in-time moments of celebration and joy.

Personally, the angry pessimistic state of security is comforting and actually does make me happy.

As a parting philosophical shot, I will say just be happy with the world around you right now. Enjoy our progress and enjoy nature at every moment you can.

searchsecurity article on cissp growth vs security value

Via @Mckeay, I read this SearchSecurity article on the problem between CISSP value and security industry growth. Disclaimer: I’m a CISSP-holder.

“I need to find 2 million people in three years to come close to meeting the expected need,” [(ISC)2’s Executive Director] Tipton said in reference to the information security-related job growth his organization forecasts.

I read that and my first reaction was, “That’s not your problem.” *You* don’t need to *find* these job-fillers. *You* need to just keep certifying *qualified* people to hold your certification. There’s an extremely subtle difference there. A difference that isn’t so subtle once it permeates years of effort and turns things into, well, this currently watered-down certification, where I see very basic questions coming from CISSP-holders as well as a just plain lack of knowledge and value from many. I hear, constantly, tales of people getting a CISSP just because they need to, for maybe a sales role or something. And it’s entirely possible to do that with a book-based test.

Thankfully, McKeay essentially echoed my sentiments:

“But the CISSP doesn’t really meet that need because it’s not training per se for any particular discipline,” McKeay added. “It’s simply a way of registering people who have learned enough to pass a test, not necessarily learned enough to do a particular job or even be successful.”

I really think this is a problem where greed is a key factor, where capitalistic growth is the default goal of a business: if you’re not growing revenues and fattening pockets, then you’re failing. A non-profit (yeah right) like ISC2 should *not* actually be interested in growing numbers on any artificial platform or reason. It should be just fine and dandy with maintaining a status quo of incoming cert-holders. If it *needs* to grow revenues, perhaps look into sanctioned training in security topics (though that might put it in direct competition with places like SANS, which is sort of a good thing). But it’s also not like the CISSP needs to gain credibility. It has *had* that for years, and ISC2 doesn’t quite seem to understand how that credibility is going to erode (much like Microsoft certs).

physical/wireless incidents won’t happen to us!

From the “we’re too small/it won’t happen to us” file (and via infosecnews) comes this article about a crew of cyber-thieves who would break into business wireless networks or even physical buildings to do some digital mischief and steal money. This article seems well-written, and here are some key points I want to highlight:

The indictment accused the men of “wardriving” — cruising in a vehicle outfitted with a powerful Wi-Fi receiver to detect business wireless networks. They then would hack into the company’s network from outside, cracking the security code and accessing company computers and information.

Another way to say it: random guys wardrive and find random wireless networks to attack. And they do so!

In other cases, they would physically break into the company and install “malware” on a computer designed to “sniff out” passwords and security codes and relay that information back to the thieves.

Physically break into a business, and plant malware or other devices to try to get at juicier loot. That’s a pretty big deal and hard to find if you’re not specifically looking for something like that after a break-in.

It also means you have some decently intelligent criminals who aren’t necessarily doing what usually gets thieves caught: liquidating their loot or associations with other criminals. And they also can be pretty random with their attacks while they wardrive. Intelligent, random criminals with few opportunities to get caught until after the fact, are a typical nightmare for LEO.

As this next blurb says, debit cards and online purchases and things that make our lives convenient also make criminal lives convenient:

“Everything that makes it easy for us to do our business online makes it easy for them to commit crimes online,” Durkan said.

I also like this:

At Wednesday’s news conference, representatives from three of the victim businesses explained how they believed their networks were secure and how quickly the thefts occurred.

I really strongly believe all of the victims were small enough not to have a security role in their business, and likely had no security interests beyond whatever employees picked up in consumerland and the default physical security from their lessors.

The only way to fix that is continued proactive education and, unfortunately, examples and lessons from other victims. I’m not about to say they need to create a security role or get an in-house security expert, and maybe not even a high-end pen-test, but rather just pick up a local security expert for some verbal consultation and some technical chops to do small-time assessments and fixes. That’s really all it takes to keep a business from being the easiest target on the block.

Also, don’t skip over the sidebar in the article, which contains some helpful tips. I’m actually a bit surprised by a few of them, as they’re good! (You can, however, skip over the comments, because they’ll make you feel dumber for having read them.)

resources for analyzing malicious pdfs

If you want to get a toe into the world of analyzing malicious PDF files, check out this analysis walkthru, including all the various tools and links therein, for a great look. The PDF format is well bounded, and really you just need to understand some JavaScript to figure out what is going on. Clearly, a little bit of scripting knowledge is useful (in the link above, Python) when doing parsing and deobfuscation. Grab some PDF files and analyze away. Add some JavaScript to the PDF files, and check those. Then grab some malicious PDF files, and see how they do what they do.
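A first triage pass doesn’t even need parsing: just count risky PDF keywords in the raw bytes, in the spirit of tools like pdfid. This toy sketch of mine ignores encodings, hex-escaped names, and object streams, so treat it as a quick first look only:

```python
import re

# Standard PDF dictionary keys that flag risky features (not exhaustive):
# JavaScript actions, auto-run actions, launch actions, embedded files.
SUSPECT = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
           b"/Launch", b"/EmbeddedFile"]

def pdf_triage(data: bytes) -> dict:
    """Count occurrences of suspicious keywords in raw PDF bytes."""
    return {kw.decode(): len(re.findall(re.escape(kw), data))
            for kw in SUSPECT}
```

Anything with a nonzero /JS or /OpenAction count deserves a closer look with the real tools from the walkthru.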

Now, if you *really* want to know what the resultant code does, you’ll need a bit of Assembly/shellcode knowledge, process debugging, and probably access to vulnerability/exploit resources to see common exploits and leveraged vulns. More than likely, you just need to investigate a PDF enough to get some good strings to search for known malware.
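For that strings-searching step, even a few lines of Python reimplementing the Unix strings tool gets you surprisingly far against deobfuscated payloads:

```python
import re

def strings(data: bytes, min_len: int = 6):
    """Extract printable ASCII runs from a blob, like the Unix strings tool."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]
```

Run it over extracted shellcode and grep the output for URLs, filenames, or registry keys to search against known malware.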

Follow links on that blog plus others in the posts to web your way through various other analyses by various other people.

quick look at sept 2011 microsoft security patches

It’s been a while since I shared the monthly Windows patches write-up that I typically do for work, and I probably should just post them, even though they have a heavy slant towards the server side of things, since that’s what I manage. Ok, so this isn’t verbatim, since I scrub some particulars that apply to my company; specifically, I mention our risk for each patch and list the actual specific updates that I release because they apply, or may someday apply, to us. Also, I should add that the target audience for this is somewhat technical, but not really other server administrators; more like other IT staff and managers. These are also largely written for my own notes, so I know what is being changed in our environment. I pull all actual updates straight from WSUS syncs.

And for the record, the new Microsoft bulletin pages look lame. Also, this is one of the very few months we don’t have any IE patches. Strange.

Further information on patches can be found at isc.sans.org or even eeye.


MS11-070 Vulnerability in WINS Could Allow Elevation of Privilege (2571621)
An attacker with a valid login could send a specially-crafted WINS packet to a listening WINS server (loopback interface only) and exploit a local escalation of privilege vulnerability. This update fixes that vulnerability, and should be considered critical to install on any servers with WINS listening.

MS11-071 Vulnerability in Windows Components Could Allow Remote Code Execution (2570947)
This update fixes the way Windows may load nearby malicious DLL files (DLL linking vulnerability) when opening .txt, .rtf, or .doc files over a network share or WebDAV connection. This isn’t a big deal from an external attacker perspective since we block SMB and WebDAV traffic from exiting our network, but this type of vulnerability is still very important if not critical to get patched on systems, partly because of the ubiquitous nature of .txt and .doc files in a typical enterprise network, but also the commonly-held assumption that .txt files are “safe.” The details of this vulnerability were made public this past month. It is interesting that this patches core Windows components and not software that typically reads these files, like Microsoft Office, Wordpad, or Notepad.

MS11-072 Vulnerabilities in Microsoft Excel Could Allow Remote Code Execution (2587505)
This update fixes 5 issues with how Microsoft Excel opens specially crafted files. This update should only apply to a handful of servers that have Microsoft Excel or Office components installed.

MS11-073 Vulnerabilities in Microsoft Office Could Allow Remote Code Execution (2587634)
This update fixes 2 issues in Microsoft Office, one that loads nearby DLL files when opening other files (DLL linking vulnerability), and another that deals with how Office opens specially crafted Word files.

MS11-074 Vulnerabilities in Microsoft SharePoint Could Allow Elevation of Privilege (2451858)
This update fixes 5 issues found in Microsoft SharePoint, all generally affecting the web interface and behavior of a SharePoint installation (XSS, script injection, and file disclosure).


DigiNotar fraudulent root certificate revocations
In the past few weeks, a security incident has been discovered with a Dutch Certificate Authority company, DigiNotar, in which malicious hackers were able to get fraudulent SSL certificates issued. These certificates were issued using widely-trusted DigiNotar root certificates. These updates revoke the trust that Windows (and Internet Explorer) had in place for the affected DigiNotar root certificates. Not trusting these certs should have no impact to us, as we have no relationship to DigiNotar or any of their customers. This largely is a client/workstation sort of update, rather than servers, but does still apply.

for the technically proficient, an article on laptop security

Via Securosis I followed a link to a detailed article on laptop security. I think everyone should read this article, even if you’re not of a mind to go to these technical lengths to protect your device from an attacker. Props to the author for also mentioning browser-borne attacks, as I feel most common users are far more commonly catching their own trojans and keyloggers during their own use than any attacker trying to put one on physically.

The steps themselves may seem over-the-top (they fall in the scope of the article title!), but I definitely have to stop and think that there are people who have an expensive laptop as their only device, and they have work/personal stuff on there that is worth money to them and maybe to other people. Me, I probably would write off a stolen laptop, take mental inventory of what I have lost data-wise, and assume that the thief is not someone looking to steal my identity or leverage my browsing history to start SEing me. Honestly, the chances of that happening (and happening to me!) are exceedingly slim. Not because I’m impervious, but because the “common laptop thief” here in Iowa is just looking for a computer to use or to liquidate as quickly and safely as possible. They’re not going to whip out the cold boot attack or boot-loaded keylogger. (How come we don’t delve into wallet security quite as extravagantly as laptops? Or home security?)

I also have multiple devices, and partly because of the need to use them all, I don’t have my important stuff stored in just one place on an easily-stolen device (ok, that’s arguable, but you have to get into my apartment…).

Some of this position is certainly influenced by my enterprise experience. To a business, writing off a laptop expense is nothing compared to the expense of losing a laptop with client-sensitive information stored in the clear on it. Or the loss of the common local admin username/password. Or VPN credentials. The only scalable solution is to make such device loss a simple hardware cost that a business isn’t even going to blink twice about.

I will say, though, I still like the idea of a protected USB key as a complement to laptop devices. And I’ve long since lost any skill I had at creating and maintaining one. */me marks that down as a rainy day project this fall.*

diginotar response, plus ca bcp/dr planning

I have two more thoughts on this whole DigiNotar mess before I hopefully never post about it again.

First, DigiNotar gets breached and trust in their process is broken. We shun them like the lepers they are! Earlier this year, RSA gets breached and trust in their process is (arguably) broken. We wring our hands and wait. The reaction to DigiNotar is not scalable. Sure, it perhaps is the correct approach for various reasons (a- protect yourself, b- give them an economic lesson in the risk of insecurity, c- trust is never “slightly” broken, it’s all broken!…), but it just doesn’t scale to a more important CA or 3rd-party trust provider.

That bothers me. There are lots of innocent victims of DigiNotar who could have done nothing to prevent this issue or better vet DigiNotar. Is that the fault of the people/orgs who shunned DigiNotar, or the fault of DigiNotar? If we, as reasonable security practitioners, hold fast to the idea that breach is inevitable, then it’s the fault of the trigger-happy fingers who shunned them, right? Otherwise, why are we placing trust in anything outside our walls at all?

I’m not entirely sure I buy my own arguments yet, but that’d be discussion-for-thought…

Second, I listened to the Cyber Jungle podcast (my first time even hearing about them) specifically to hear the interview of Venafi’s Jeff Hudson who recommends an SSL Certificate breach response plan (keeping in mind his company offers solutions in this space). I was a bit keen to hear what insight someone might have on such a response plan. His plan (min 27:00) takes three general steps/questions (I’m not sure if he’s talking only about SSL certs or more broadly in what he calls your overall 3rd party trust):

1. Who are you using for trust?
2. Where are the certificates?
3. Be ready to replace certificates in response to a problem.

These make sense, but I guess I was already mentally past the first two items and really wanted to hear a strategy for #3. No such luck, and I guess I’m not surprised since that’s really the problem.
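At least item #2 can be partially automated. A minimal sketch (the host list and chosen fields are my own, not Hudson’s) that pulls the issuer and expiry off each site’s leaf certificate, which is most of a cert inventory for web-facing stuff:

```python
import socket
import ssl

# Hypothetical inventory; in practice this comes from your asset database.
SITES = ["example.com", "example.org"]

def parse_cert(host, cert):
    """Reduce a getpeercert() dict to the inventory fields we care about."""
    issuer = dict(pair[0] for pair in cert["issuer"])
    return {"host": host,
            "issuer": issuer.get("commonName", "?"),
            "expires": cert["notAfter"]}

def cert_summary(host, port=443, timeout=5):
    """Fetch a site's leaf certificate over TLS and summarize it."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return parse_cert(host, cert)
```

From there, step 3 is still the hard part, but at least you’d know what you have and who issued it.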

At my day job I manage over 100 web sites, most of which have SSL certificates (to keep this simple). If my CA (Network Solutions) happens to get breached and their roots shunned, in the short term I’m fucked no matter what I do or how much I plan. This is because my domains are hosted by Network Solutions, and I cannot buy a certificate for one of those domains from a different registrar.* I mean, that’s the whole point about making sure certificates are valid! So if tomorrow NetSol is shunned, I have to “quickly” move all my domains elsewhere and initiate the SSL process. By the way, almost all of my certs are EV SSL certs (yes, I hate them), and they’re not quick to issue, by design. I’d probably have to downgrade them in the short term and then field any questions about the lack of pretty green colors in the damn address bars.

And that’s just the “simple” 3rd-party trust that is web-borne SSL.

There’s really no BCP/DR plan other than having a pre-existing relationship with another CA that you can migrate to quickly. There’s no high availability, though, and no quick failover. You also need to at least have a few domains/certs on the second provider so that your staff is used to working with them (and they’re used to working with you!), but clearly that increases administrative overhead just a bit.

This gets even worse for those people (not me) who not only use their CA just for domains and certs, but also for their actual hosting. Now there’s a nightmare I don’t want to imagine!

* Strictly speaking, you can do this, but doing so illustrates, and puts further pressure/exposure on, a process that is flawed. If I go to an SSL provider and ask them to issue me a cert for a domain hosted by NetSol, their only recourse is to email the publicly listed contact and treat the response as full authorization. This process does not make any reasonable security person feel joyful, and it has been a source of abuse in the past (we’re talking reliance on automated processes and/or low-on-the-pay-totem-pole customer support).

security elephants aren’t endangered

If you read nothing else each week as far as infosec blogs, always check out the weekly Incite at Securosis and weekly reviews at Infosec Events. Yeah, it’s kinda cheating since both branch out and point elsewhere, but at least it’s not nearly as static a list of links as any of our RSS feeds end up being.

Over on the Incite, I particularly like a piece by Rich Mogull which I’ll blatantly steal and repost here because, well, it’s truth (emphasis is mine):

…But if you want to quickly learn a key lesson, check out these highlights from the investigation report – thanks to Ira Victor and the SANS forensics blog. No logging. Flat network. Unpatched Internet-facing systems. Total security fundamentals FAIL. …Keep in mind this was a security infrastructure company. You know, the folks who are supposed to be providing a measure of trust on the Internet, and helping others secure themselves. Talk about making most of the mistakes in the book! And BTW – as I’ve said before I know for a fact other security companies have been breached in recent years and failed to disclose. How’s that for boosting consumer confidence? – RM

I’ve recently been talking about elephants sitting in our infosec rooms. There are a lot of them. The first bit I bolded above is one of them, and I really feel that very few organizations get the fundamentals even started, let alone tight (that’s as much a statement of economic reality as it is a criticism). Still, DigiNotar’s state is pretty egregious.*

But Rich’s point drives home: DigiNotar is a friggin’ security industry company (maybe they forgot that, maybe that should be their mission statement). Yes, utter fail. (Now, back to who audited them in the face of such fail, or who lied to the auditors?)

The second bolded statement is also something I have to reluctantly agree with: reported incidents are just the tip of the iceberg. And we’re not talking solely about executive decisions to hush up events for fear of public humiliation, but also middle management and even techies staying quiet about things. I absolutely am not surprised whenever I hear at the bar the inevitable tales from auditors and security folks about incidents that were hushed up or poor security that is hidden with smoke and mirrors.

From top down, this is classic negative conditioning: you get slapped for action X, so you either stop doing or try to hide action X. If you try to stop it, but it costs money that you get slapped for…

* As a bonus discussion, Richard Bejtlich has been talking a lot recently about threat-centric security vs vulnerability-centric security. DigiNotar is clearly an entity that needs to apply threat-centric principles (who are your threats, what do they want that you have?). But can you do that when you’re not even doing the fundamental vuln-centric stuff?**

** For those who’ve played StarCraft II, I could use an analogy for you. Perhaps threat-centric security would work, but I feel like it is definitely a sort of “all in” approach you have to take in order to be effective. There’s no doing some things here, and some things vuln-centric. You’ll just spread your resources too thin and not be good at either side. Sort of similar to multiplayer SC2. You could build a few of every unit, but you’re going to get trounced; you really want to focus all of your efforts on one strategy, and adapt/change only as a reaction to what your opponent is doing. <--There's seriously a big blog post comparison waiting to happen there.

thought: replace diginotar with network solutions or verisign

One point I’ve not effectively made that I should before I stop adding nothing to the discussions about CAs and DigiNotar: scope.

It’s one thing for this to happen to DigiNotar over in the Netherlands. But think about the impact of this if you live in the Netherlands.

Or what if this had been Network Solutions or Verisign or Thawte? And suddenly browser vendors shunned their roots or CNet and other journalists gave your userbase instructions on shunning root certs. Think of the impact to your users if you run websites, to your own users who browse other websites, and your own desire to buy something off Amazon whose cert may now not be trusted for a few days.

I know there are tons of blog posts and articles explaining how to block trust in (or untrust) the DigiNotar roots. But that’s a pretty damaging, somewhat “scorched earth” approach to addressing the problem.

Besides which, absent a currently unfolding incident, why should Network Solutions be given any more trust than DigiNotar? Of the 600 CAs, how do you stratify which are better than others?

tinfoil hats and web of trust chatting

Lots of talk recently about DigiNotar and Iran. I’d posit this problem is more impacting than people think, but not for reasons that are being bandied about. I don’t usually don quite so big a tinfoil hat, but I certainly don’t want to act naive about realistic risks. I’ll try to keep my statements brief, though a bit rambling.

Hypothesis: Iran made legitimate requests of DigiNotar for certificates. This is normal business for a CA. (This may or may not be true at all, but it still stands to illustrate a point.)

Iran cares about intercepting communications for governmental security purposes.

Every dang nation in the world cares about intercepting communications for governmental security purposes, though in some cases we really hope it is with documented procedures and reasons (i.e. like we hope for the US).

Every CA has a channel through which a government can request whatever sort of cert it wants to aid interception. You really think any CA that does business in country X will be able to keep conducting business if it rebuffs the host government? No. (Apply this thinking to things like Skype or Google’s portals for requesting data on people of interest, for some precedent.)

The government(s) isn’t going to let there be some completely private global (or even national) means of communication without leaving them the ability to tap into it if needed. I’d posit that this partially explains various not-optimized communications security like CDMA and such.

The web of trust for SSL/CA/web infrastructure is weak, and maybe even broken, but that’s unfortunately part of the (mostly accidental) design, if you ask me. Granted, this was all devised long ago, when scale wasn’t a huge concern, before we had 600 CAs in the world that nearly every browser inherently trusts, because that is good for business and eases user frustration and effort (if you run an e-commerce website, just think how awful it would be to work with every user whose browser won’t trust everything inherently). Sadly, a “web of trust” is inherently only as trustworthy as its least trusted part, and it only takes one mistake to let that in. Maintaining that trust amongst the general public does not outweigh business health/profits.

At some point I have to trust something, because I am not smart enough to really be able to intelligently verify my trust in most things encryption. It’s a quandary, certainly.

Getting back to DigiNotar, what’s the best way to cover your ass when someone finds out you’ve been giving shit away to other governments when they force you to or pay you enough? Pre-existing hack proof to give you deniability.

Anyway, that’s one way to look at it. Honestly, I’m sympathetic to typical LEO thinking: the simplest solution is almost always the correct one: someone broke into DigiNotar and issued themselves certs. But I’m also sympathetic to the idea that govs require access, even if the common person thinks they’re communicating securely.