terry childs found guilty

(Don’t get too upset if you don’t agree with something I say here; I likely won’t get too deeply into the discussion. There’s far too high a chance that most discussions consist only of straw man arguments, or of over-generalizing without admitting to exceptions…read the many comments about this case and you’ll see them rife with logical fallacies. Wait, are mainstream comments anything but? heh!)

The case against Terry Childs has come to an initial close as he has, predictably, been found guilty. I expect that, even with the guilty verdict, there is still a chance Childs can raise other grievances against the city of San Francisco and his superiors over how all of this was handled. At least, I kinda hope so, because my continued impression is that Childs is as much a victim as he was the problem, i.e. the victim of absolutely horrible management, in both technical and non-technical respects.

Chief Security Monkey has a nice article with some comments reposted on his blog, which I suggest reading through. Update: This is a great ComputerWorld interview with one of the jurors.

I have a pending comment on that site, but wanted to just record some of my own thoughts here.

Management is fully to blame for this situation, both for horrible policies and for probably conditioning Childs in a way that made this escalation inevitable. These are people who should be banned from ever managing other people again. Or from managing anything technical. They obviously don’t get it. It saddens me that while Childs broke the law, these managers won’t be similarly tried and branded.

Childs is, of course, also to blame. He should have just walked away. Or he should have given up the access and taken the blow from management (which likely would have resulted in firing). But I can’t necessarily blame him for leaning into the wind stubbornly. That’s just how some people are. But yes, strictly speaking, he broke a section of the penal code, hence I’m neither surprised nor much saddened that he was found guilty of that part.

I expect Childs and this whole situation were the product of a stubborn-to-a-fault (righteous?) admin, a failure of management, and psychological conditioning.

Yes, that conditioning part is where I take a leap of faith, but I expect my leap is not all that large. If, in the past, Childs was harmed or blamed for lapses in his network due to someone else’s changes, then I am not at all surprised that this escalated into him refusing to let anyone else into the network. Did he have anything to hide? Doesn’t look like it. Was he trying to hold the city hostage? I didn’t get that impression. Was he trying to make sure it kept running so he wouldn’t get in trouble when some moron took it down and blamed him? Probably. If I held you ultimately responsible for my coffee cup never spilling, you’d probably try to keep everyone away from it, especially if someone had spilled it a few days ago when you weren’t looking and I blamed you for it.

But, in the end, while I see lots of idealistic responses and comments about this situation, I think it is far, far, far easier to talk about escrow and continuity than it is to actually walk that walk, from both an administrative and a managerial perspective. It takes work, knowledge, politicking, and proper people management to even begin. And I think far too many people who make comments of that nature don’t follow their own ideas in practice, not just for godlike administrative access but also for smaller things like inconsequential accounts, processes, systems, programs, scripts, and so on. It is the nature of things that when someone leaves, there is a gap and a loss of some information…no amount of planning will truly overcome that for highly skilled or specialized job roles.

But that’s me, and I’m a cynic. 🙂

could you also do this for us?

Adrian (Lane) authored an absolutely awesome article atop the (damn, no more ‘a’ words to use…) latest Securosis Friday Summary post.

It had started innocently enough…

Yeah, just go read the story! If you’ve worked in IT for 6 months or more, you know how this goes, on various levels. From small requests snowballing into larger requests, to network creep, to “temporary” things becoming permanent things, to how, despite how much you strive to do things one way, all it takes is one (even innocent!) person doing it another way to break down consistency…and so on.

southern fried security podcast 10 with darkoperator

Episode 10 of the Southern Fried Security podcast is available and it includes a great discussion with DarkOperator about getting started and getting involved in security. Skip ahead to 13:30 for the start of that discussion. In short, get involved in a positive manner, and if you’re already in security or have some knowledge, contribute and pass it on! Check the podcast out for all the discussion points.

san fran admin terry childs case heading to a decision

The case against Terry Childs, former San Francisco network admin, is hopefully coming to a close soon, and I’m anxious to hear what the jury decides.

I fall on the side of those people who don’t dismiss this case with a hand wave; I think it makes an important statement about management, policies, security, and IT operations.

I’ve been in similar, but far, far smaller, situations where I had to expand access or duties beyond myself to other people. And there are very real times when doing that leads to a degradation in the quality of the work, even up to someone being dumb and bringing down a network or device! I understand his position, even if I wouldn’t have defended it to quite such a degree!

I’ve also seen extremely protective admins whose strangle-hold on their operations starts introducing new avenues of risk, especially in terms of business continuity.

Of course, going too far in the other direction, where things are spread out amongst so many people, adds yet different risks: well, too many people with godlike knowledge… Work long enough in IT, and everyone at some point experiences that non-technical manager doing idiotic things just because he has the access…which only conditions the behavior Childs exhibited!

a security serenity prayer from delchi

A week ago I posted about how if security wasn’t hard, everyone would do it. This is quickly becoming my mind’s theme for this spring.

I’d take this a step further as well: if there were some silver bullet, ultimate truth, or Answer for security, we’d have found it already, and when we heard it our brains would crack and we’d drop to our knees in all-praising wonder at The Answer.

Alas, there is no Answer.

That’s not to say all discussion is pointless; quite the opposite. We certainly need discussion, but we also should realize that, like an asymptote in calculus, we can only approach and draw near to real Answers, not realize them entirely.

It helps to also see a quote from A. P. Delchi posted by Chris Nickerson (which I can’t believe I didn’t re-post on here already!):


“Grant me the serenity to accept people that will not secure their networks, the courage to face them when they blame me for their problems, and the wisdom to go out drinkin’ afterwards!”

There is no Answer, but we should still work towards it as much as we can…though not so much that we can’t step back, respectfully clap each other on the back, and have a drink.

the no-answer passionate argument we can’t avoid

Ugh. You know, sometimes in security there are heavy issues you just don’t want to have in front of your face, but then you walk away and come back and see them again, and it instantly brings the pot back to a boil (not an angry boil, just a boil).

That is how I feel when I write and erase and rewrite posts about Cormac Herley’s paper [pdf] from last year. I walked away to lunch, decided not to post, and started closing my windows until I got back to the originator for today: the Boston Globe, with this tagline: “You were right: It’s a waste of your time. A study says much computer security advice is not worth following.” (via Liquidmatrix) Yeah, I knew the moment I saw this paper that it would make misguided headlines just like this (to its credit, the headline is the worst part, and likely not even written by the author but rather an editor).

It is not so much the article as it is the 120+ comments attached to it, which lend importance to the topic…most of the commenters have no idea about the costs involved in building an infrastructure correctly the first time versus how pretty much all of them are built today: grown. Over time. Over years. A one-off app written 4 years ago suddenly gets a few late features added, which makes it mission critical for 75% of your staff…and so on.

I agree with what Chandler Howell (NewSchoolSecurity) said; actually, two things he said. First, the paper seems incomplete, or at least basically tries to monetize the bitching of users but doesn’t seem to have any idea what to do about it (like so, so, so many other rants/attempts…we get the fact that security has an inverse relationship to convenience…duh!). Second, at the end he mentions making security as transparent to the user as possible. Yes.

Of course, that means tipping the scale between user education vs technological (in this case, what I read as transparent) controls closer to the technological controls side. Larry Pesce also opined (Fudsec) about this in regards to the futility of user education. Perhaps user education does still have a point. The paper makes an attempt to demonstrate that user “stupidity” is a rational behavior. But would user education actually demonstrate why that rational behavior is in fact wrong? (“Rational” is being used in the “justified” sense.) Is it rational for users to open email messages, or should that actually *not* be the rational action when the user knows and accepts that someone from Nigeria probably wouldn’t be emailing them?

Nonetheless, read the comments on the Boston Globe article for the “user” viewpoint. Read the comments on the other articles I posted for security professional opinions. Yes, something is wrong, but I think much of it still has to do with: people making mistakes; economics (which has various influences here!); cost (again, various angles); and how IT does business fundamentally. (Mycurial had a great comment on the Fudsec article.) Really, unless security has true demonstrable value to your organization, it *has* to lag behind attackers, technology, implementations, and IT in general. (I know, that’s an arguable point!)

Anyway, this is me sharing my growling. 🙂 …and adding another rant! I can rant about people ranting who don’t have any solutions, but I’m answering back with more ranting with no solutions as well. I guess the most I can hope for is some cathartic release!

finding religion through a life-threatening moment

I’ve said it for years, and it continues to be one of my driving “laws” of security: People/organizations care far more after they’ve been violated. Newest case in point, Google*:

“Google is now particularly paranoid about [security],” Schmidt said during a question-and-answer session… After the company learned that some of its intellectual property was stolen during an attack…it began locking down its systems to a greater degree…

This is another reason I believe in penetration testing. Sure, it doesn’t quite yank one’s pants down, drive a kick to the balls, or incite that same sense of dread as a real event would, but it should strive to come as close to that as possible. It’s not just about popping boxes with an exploit, but rather demonstrating that, “I just stole your super secret plans. I just deleted your directory servers. And backups. This will cost you xyz. And I sold the backdoor to the Ukrainians, but not before I joined all your servers to a Chinese botnet and sold all your client data to your closest competitor.”

Shows like To Catch a Thief and Tiger Team (and that one social engineering/con/pickpocketing show…) did a great job in demonstrating issues and conveying a taste of the, “Oh fuck…” moments.

I understand we tend to learn through experience. From not touching an oven until we’ve been burned, to not speeding until we’re pulled over, to not wrapping up until we have the herps. But we all have the capability to be informed and not make the mistakes in the first place, or to seek help in areas we don’t understand (yes, that costs money…).

I may, however, just be an ass about people who can’t (or don’t) think ahead…

* Google is a tough case to use, honestly. They had everything to gain by outing China, outing IE6, and raising their own, “we’re-just-being-a-good-steward,” stock. Still, they’re not unique.

again, why should an organization disclose security breaches?

DarkReading throws out, Organizations Rarely Report Breaches to Law Enforcement. This is a, “Duh,” moment, but I do like reading the reasons given in the article.

Taking this further, I think data breach disclosure is still a lot like the age-old iceberg analogy. Even with actual laws requiring disclosure, I would bet all the data breaches we hear about are just the visible top of the iceberg, and a whole host of other breaches (both known and undiscovered) lurk in a huge steaming pile below our field of view.

I firmly believe that many businesses (if not all of them!) have a first reaction to ask, “Is this public yet? How likely is this to be public?” And then to kneejerk on the side of saying nothing and keeping things hush-hush. Of course, until someone finds out, most likely through third-party fraud detection analysis or the finding of files obviously stolen from that organization. I would actually expect (whether I like it or not) that all companies will stay mum when not given extremely huge incentives to disclose (jail time, extreme fines, jeopardizing of business).

Hell, I would even expect this occurs not just in disclosure to the public or to law enforcement, but in internal disclosure as well! A tech finds evidence of attackers and tells a manager. And somewhere along the chain up, the message gets squelched for fear of one’s job or a naive misunderstanding of the importance of some incidents.

I wonder how many cases Verizon (or other security firms) worked on in their DBIR that should be disclosed, but the host company has opted to stay quiet on. Again, I’d bet it’s a decent number. (Note that I’m not trying to criticize Verizon or security firms, who are likely under NDA and have certainly given their strong advice, but rather the organizations making the ultimate decisions about security and disclosure. Props to any sec firm that still makes an effort to distribute as much info as they can [formal or informal] to help the rest of us!)

if security wasn’t hard, everyone would do it

I’ve been feeling firsthand the pain of implementing PCI in an SMB for the past 6-odd months. It’s not all that fun in some regards (implementing ongoing security in an environment that doesn’t have the time for those tasks). So I try to read opinions on PCI any time I see some.

In futilely catching up on my RSS feeds backlog, I came across several nice articles from the PCIGuru: pci for dummies, what is penetration testing, and the purpose of penetration testing.

To paraphrase Tom Hanks’s character in ‘A League of Their Own’, “There’s a reason security is hard. If it wasn’t hard, everyone would do it.”

Truth. I think it gets even harder the more you avoid having qualified staff add to your security value. You want to automate everything for the checkboxes? You’ll end up spending more and getting less in return, even if you do fill in the checkboxes.

This could lead into the other two articles about pen testing. I am a proponent of pen testing as a necessary piece of a security plan for various reasons. But I also think one reason vuln assessments and pen testing get blurred is the limited engagements that many third-party pen testers get thrown into, in terms of time and scope. Give a tester 2-5 days for a network-only test and you really are forcing them to rely heavily on automated tools more akin to vulnerability assessments. Granted, you get a lot more, but you also get a lot more from having qualified internal staff always thinking from an attacker’s perspective, who can also do longer and more frequent pen-testing types of duties.

In short, it just comes back down to my continued, deeply-held belief that security begins and ends with talented staff. Just like your software products, financial audits, and sales efforts begin and end with staff appropriate to their duties.

also protecting personal data over work lines

Just a few days ago I read about and mentioned a recent New Jersey ruling about client-attorney communications and storage in temporary files on a computer.

I failed to delve into the idea that possibly, quite possibly, other controls in an organization may be affected, namely traffic captures and web filtering tools, especially if SSL termination is provided with the latter.
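To make the SSL termination point concrete: when a filtering proxy terminates SSL, the browser ends up trusting a certificate minted by the company’s internal CA rather than a public one, so the certificate issuer is the giveaway that "private" traffic is actually being inspected. Here is a minimal sketch of that check; the corporate CA names are purely hypothetical, and the certificate dict follows the shape Python’s `ssl.SSLSocket.getpeercert()` returns:

```python
# Hypothetical names for a corporate interception CA; real deployments vary.
CORPORATE_CA_NAMES = {"Example Corp Web Filter CA", "Example Corp Internal CA"}

def issuer_common_name(cert):
    """Extract the issuer CN from a certificate dict shaped like the
    return value of ssl.SSLSocket.getpeercert(): 'issuer' is a tuple
    of RDNs, each RDN a tuple of (attribute, value) pairs."""
    issuer = {name: value for rdn in cert["issuer"] for name, value in rdn}
    return issuer.get("commonName", "")

def looks_intercepted(cert):
    """True if the cert chains to one of the (hypothetical) corporate
    CAs, i.e. a web filter is terminating SSL for this connection."""
    return issuer_common_name(cert) in CORPORATE_CA_NAMES
```

A cert pulled from a connection at the office would name the internal CA as issuer and trip this check, while the same site viewed from home would show the public CA.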

new jersey ruling on email privacy at work

This is the kind of story and court-ruling that makes my head spin. Via DarkReading:

In a ruling that could affect enterprises’ privacy and security practices, the New Jersey Supreme Court last week ruled that an employer can not read email messages sent via a third-party email service provider — even if the emails are accessed during work hours from a company PC.

According to news reports, the ruling upheld the sanctity of attorney-client privilege in electronic communications between a lawyer and a nursing manager at the Loving Care Agency.

After the manager quit and filed a discrimination and harassment lawsuit against the Bergen County home health care company in 2008, Loving Care retrieved the messages from the computer’s hard drive [temporary cache files] and used them in preparing its defense.

I’d suggest checking out the ruling itself [pdf].

Some of this sounds fairly obvious, right? But what really raises questions would be laptop users who take their system home or offsite (i.e. away from the shelter of corporate web filtering) and then use it to connect to personal email accounts. Do employees have a reasonable right to privacy for any artifacts that get stored on the system, especially those of a protected nature like attorney-client or doctor exchanges? If so, do employers have a duty to take extra care with those systems, any backups made, or images made after a termination? Or during technical troubleshooting and such?
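And the artifact concern is not hypothetical effort-wise: anyone imaging or troubleshooting such a laptop could trivially sweep the browser cache for traces of personal webmail. A rough sketch of what that sweep looks like; the marker strings are illustrative assumptions, and the cache path would depend on the browser:

```python
from pathlib import Path

# Illustrative markers of personal webmail use; not an exhaustive list.
PERSONAL_MARKERS = (b"mail.google.com", b"mail.yahoo.com", b"attorney")

def find_personal_artifacts(cache_dir):
    """Walk a browser cache directory and return files containing any of
    the personal-use markers. On a real system cache_dir would be the
    browser's cache location for the user's profile."""
    hits = []
    for path in sorted(Path(cache_dir).rglob("*")):
        if path.is_file() and any(m in path.read_bytes() for m in PERSONAL_MARKERS):
            hits.append(str(path))
    return hits
```

That a few lines like this can surface protected communications from a routine disk image is exactly why the ruling puts employers in an awkward spot.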

Things like this end up resulting in complex policies, especially those designed to protect both business and individual interests. The same kind of policies that get ignored once they get too complicated…