I’m not sure why I’ve not seen this story before, but this is a downright fascinating diamond heist article over on Wired.
a link out to a banking fraud case study
Chief Monkey has linked to an excellent case study in corporate banking fraud. The story takes a few pages to work its way into the juicier details, but it is worth the slog to get through it.
The network still has a perimeter, but the business and its users have less of a perimeter. If you can check email from any system, then your email password can be snarfed by any of those systems if they’ve been victimized by a drive-by trojan. This can often lead to further attacks, even up to logging into a VPN session from a remote location! People like to think of one-time attacks and siphoning of valuable data, but few think about an attacker looking over your shoulder and reading your emails and data continually.
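To make that VPN worry a little more concrete, here’s a rough sketch of the kind of dirt-simple check that catches a stolen webmail password being replayed against the VPN: flag successful logins from source networks a user has never come from before. Nothing here is from the actual case study; the usernames, networks, and login records are all made up for illustration.

```python
from ipaddress import ip_address, ip_network

# Hypothetical "usual" source networks per user (say, office plus home DSL).
KNOWN_NETWORKS = {
    "vp.finance": [ip_network("198.51.100.0/24")],
}

# Hypothetical successful VPN login records: (username, source IP).
LOGINS = [
    ("vp.finance", "198.51.100.23"),   # looks normal
    ("vp.finance", "203.0.113.77"),    # a network this user has never come from
]

def unusual_logins(logins, known):
    """Yield logins whose source IP falls outside the user's known networks."""
    for user, src in logins:
        addr = ip_address(src)
        if not any(addr in net for net in known.get(user, [])):
            yield user, src

for user, src in unusual_logins(LOGINS, KNOWN_NETWORKS):
    print(f"review: VPN login for {user} from unfamiliar source {src}")
```

Real VPN concentrators and log tools obviously bring far more context to this, but the point stands: stolen credentials eventually show up as access from somewhere new, if anyone bothers to look.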
I wonder if the VP in the story had any personal fraud attacks against her as well, or if the company account was the juicier target. In the end, yes, home users (and their systems and networks) elevate my nervousness considerably.
My only bit of caution would be to anyone who starts crucifying banks too much about their security. There is no measure that will magically protect against fraud. It is entirely a scale between security and usability. Some banks fall low on that scale and get burned (hopefully!) for it. Other banks may slide up the scale too far only to get burned because they’re slowing down, flagging, or outright blocking abnormal but legitimate transactions for important customers. What do you do in those cases? I think most people would opt for whichever option is least economically costly from their own perspective. Just think about that for a while… People complain about bank security only up to the point where it inconveniences them too much, then complain more when it still fails, and so on. That’s not a rhetorical game I like to play… (maybe I just like to play a few more moves ahead, I dunno…)
I’m not trying to defend lax, or even negligent, bank security so much as I want to attack overzealous Sunday morning security quarterbacking that just perpetuates the problem of a wildly swinging security pendulum that can’t find any peaceful middle ground.
that blogger community experience
Mogull over at Securosis has posted, “Is Twitter Making Us Dumb? Bloggers Please Come Back.” He makes great points on the usefulness of blogging (the great PCI debates are a recent example of “blog debates” spilling into real life), and some of the comments make great points as well, such as how Facebook steals away some of the energy.
Behind on my RSS feeds
My own observations are slightly similar, although I admit I’ve had less time these days to keep up with my RSS feeds and make interesting posts here. I still troll Twitter and other places, but those are not necessarily surrogates for a good blog or even cross-blog discussion, and I can typically participate in Twitter without much actual commitment, time- and attention-wise.
Maybe we’re all just reading blogs less often, which in turn reduces the emphasis on blogs and our own opportunities to start cross-blog discussions.
Conferences
One area I’ve seen grow considerably in the last couple years is discussion and participation in security conferences. Perhaps all those discussions and talks are tiring, but they also serve the same purpose that blog discussions may otherwise have served. Why blog when you’re at a conference having the same discussions every 3 weeks?
Fewer new faces
I’ve also seen a drop-off in new blogs to follow in the security space. This may be a function of the lack of time and energy I put into reading my RSS feeds, and I agree that I tend to gravitate to the same feeds over and over. This doesn’t mean security is dwindling, especially as I’ve talked to plenty of interesting people on Twitter that I didn’t know previously.
It is possible we ask a lot of new faces in security. Where, over the last 4 years, having any content on a “security” blog was enough to get you followers, today do you need to be dropping news, novel ideas, or 0days every week? I’d hope not. We really need generic discussion as much as or more than the jaw-dropping stuff. But it’s that generic discussion that may be getting satisfied elsewhere.
Look at podcasts and conference roundtables or Twitter discussions or mailing list questions. We still have a huge capacity and energy for talking about the “generic” stuff; even stuff that has no real correct answer, just impassioned opinions on either side. It just seems to be taken to blogs less and less often.
Inherent broken records
“Cloud” notwithstanding, perhaps we just have less interesting topics to talk about. I myself am guilty of this, as I often have ideas tumbling around in my mind, but I’m well aware they’re ideas that not only have *I* had for a while now, but others have had and voiced as well. Security is not a game to win, and we’re going to have some of the same inherent deficiencies for years, decades, to come. You can really only bring them up so many times before you get sick of the obvious.
One other thing I’m guilty of: commenting vs blogging
Every time something like this comes up, I’ll have a minor discussion with myself. Do I make a long-winded comment on someone’s blog to join or initiate discussion (which maybe only he and I will see), or do I post on my blog here under the haughty assumption that my blog is worth their time to read for my viewpoint (or that they’ll even see it)? Or should I engage them more directly rather than wait for them to find my little slice of opinion? How will both of us remember to re-read the comments to see if an update has been made? (This is one reason I tend to have many web browser instances open; some are just open for me to refresh for comment responses!)
This is why I am still partial to being a forum and chat (or, in a sense, Twitter) regular. A forum is essentially a dynamic, central RSS feed of ongoing discussions and blog posts. Unlike blogs, where only new topics percolate to the top, on a forum the hot topics percolate to the top. And if you have one central place to go for participation, it becomes rather natural (which is also why I suggest fewer sub-forums).
terry childs found guilty
(Don’t get too upset if you don’t agree with something I say here; I likely won’t get too deeply into the discussion. There is far too high a chance that most discussions consist only of straw man arguments, or of trying to be too general without admitting to exceptions…read the many comments about this case and you’ll see them rife with logical fallacies. Wait, are mainstream comments anything but? heh!)
The case against Terry Childs has come to an initial close as he has, predictably, been found guilty. I expect that, while guilty, there is still the chance of other grievances that Childs can raise against the city of San Francisco and his superiors and how all of this was handled. At least, I kinda hope so because my continued impression is that Childs is as much a victim as he was the problem, i.e. the victim of absolutely horrible management, both from a technical and a non-technical aspect.
Chief Security Monkey has a nice article with some comments reposted on his blog, which I suggest reading through. Update: This is a great ComputerWorld interview with one of the jurors.
I have a pending comment on that site, but wanted to just record some of my own thoughts here.
Management is fully to blame for this situation, both for horrible policies and for probably conditioning Childs in a way that made this escalation inevitable. These are people who should be banned from ever managing other people again, or even from managing anything technical. They obviously don’t get it. It saddens me that while Childs broke the law, these managers won’t get similarly tried and branded.
Childs is, of course, also to blame. He should have just walked away. Or he should have given up the access and taken the blow from management (which likely would have resulted in firing). But I can’t necessarily blame him for leaning into the wind stubbornly. That’s just how some people are. But yes, strictly speaking, he broke a section of penal code, hence I’m not surprised nor much saddened that he was found guilty of that part.
I expect Childs and this whole situation were the product of a very stubborn-to-a-fault (righteous?) admin, a failure of management, and psychological conditioning.
Yes, that conditioning part is the one where I take a leap of faith, but I expect my leap is not all that large. If, in the past, Childs was either harmed or even blamed for lapses in his network due to someone else’s changes, then I am not at all surprised that this escalated into him refusing to let anyone else into the network. Did he have anything to hide? Doesn’t look like it. Was he trying to hold the city hostage? I didn’t get that impression. Was he trying to make sure it kept running so he wouldn’t get in trouble when some moron took it down and blamed him? Probably. If I held you ultimately responsible for making sure my coffee cup never spills, you’d probably try to keep everyone away from it, especially if someone spilled it a few days ago when you weren’t looking and I blamed you for it.
But, in the end, while I see lots of idealistic responses and comments about this situation, I think it is far, far, far easier to talk about escrow and continuity than it is to actually walk that walk, from both an administrative and a managerial perspective. It takes work, knowledge, politicking, and proper people management to even begin to start. And I think far too many people who make comments of that nature don’t follow their own ideas in practice, both for godlike administrative access and for smaller things like inconsequential accounts, processes, systems, programs, scripts, and so on. It is the nature of things that when someone leaves, there is a gap and a loss of some information…no amount of planning will truly overcome that with regards to highly skilled or specialized job roles.
But that’s me, and I’m a cynic. 🙂
could you also do this for us?
Adrian (Lane) authored an absolutely awesome article atop the (damn, no more ‘a’ words to use…) latest Securosis Friday summary post.
It had started innocently enough…
Yeah, just go read the story! If you’ve worked in IT for 6 months or more, you know how this goes, on various levels. From small requests snowballing into larger requests, to network creep, to “temporary” things becoming permanent things, to how, no matter how much you strive to do things one way, all it takes is one (even innocent!) person to do it another way and consistency breaks down…and so on.
southern fried security podcast 10 with darkoperator
Episode 10 of the Southern Fried Security podcast is available and it includes a great discussion with DarkOperator about getting started and getting involved in security. Skip ahead to 13:30 for the start of that discussion. In short, get involved in a positive manner, and if you’re already in security or have some knowledge, contribute and pass it on! Check the podcast out for all the discussion points.
tackling cybernuke scenarios
Renesys has a fun blog post discussing networking “cybernuke” scenarios. Pretty good points, and I like how the monoculture of option 3 mirrors that of desktop OS environments. One of the points that resonates with me is that any “cybernuke” someone wants to detonate would certainly disrupt their own experience on the Internet.
san fran admin terry childs case heading to a decision
The case against Terry Childs, former San Francisco network admin, is hopefully coming to a close soon, and I’m anxious to hear what the jury decides.
I fall on the side of those people who don’t dismiss this case with a hand wave; I think it makes an important statement about management, policies, security, and IT operations.
I’ve been in similar, but far, far smaller, situations where I had to expand access or duties beyond myself to other people. And there are very real times where doing that leads to a degradation in the quality of the work, even up to someone being dumb and bringing down a network or device! I understand his position, even if I wouldn’t have defended it to quite such a degree!
I’ve also seen extremely protective admins whose strangle-hold on their operations starts introducing new avenues of risk, especially in terms of business continuity.
Of course, going too far in the other direction, where things are spread out amongst so many other people, adds yet different risks: namely, too many people with God knowledge… Work long enough in IT and at some point everyone experiences that non-technical manager doing idiotic things just because he has the access…which only conditions the behavior Childs exhibited!
a security serenity prayer from delchi
A week ago I posted about how if security wasn’t hard, everyone would do it. This is quickly becoming my mind’s theme for this spring.
I’d take this a step further as well: If there was some silver bullet, ultimate truth, or Answer for security, we’d have found it already and when we heard it our brains would crack and we’d drop to our knees in all-praising wonder at The Answer.
Alas, there is no Answer.
That’s not to say all discussion is pointless; quite the opposite. We certainly need discussion, but we also should realize that like a function in calculus, we can only approach and draw near to real Answers, not realize them entirely.
It helps to also see a quote from A. P. Delchi posted by Chris Nickerson (which I can’t believe I didn’t re-post on here already!):
“GOD,
grant me the serenity to accept people that will not secure their networks, the courage to face them when they blame me for their problems, and the wisdom to go out drinkin’ afterwards!”
There is no answer, but we should still work towards it as much as we can, though not so much that we can’t step back, respectfully clap each other on the back, and have a drink.
the no-answer passionate argument we can’t avoid
Ugh. You know, sometimes in security there are heavy issues you just don’t want to have in front of your face, but then you walk away and come back and see them again, and it instantly brings the pot back to a boil (not an angry boil, just a boil).
That is how I feel when I write, erase, and rewrite posts about Cormac Herley’s paper [pdf] from last year. I walked away to lunch, decided not to post, and started closing my windows until I got back to the originator for today: the Boston Globe, with this tagline: “You were right: It’s a waste of your time. A study says much computer security advice is not worth following.” (via Liquidmatrix) Yeah, I knew the moment I saw this paper that it would make misguided headlines just like this (to its credit, the headline is the worst part, and likely not even written by the author but rather an editor).
It is not so much the article as it is the 120+ comments attached to it, which lend importance to the topic…and most of those commenters have no idea about the costs involved in building an infrastructure correctly the first time versus how pretty much all of them are built today: grown. Over time. Over years. A one-off app written 4 years ago suddenly gets a few late features added which make it mission critical for 75% of your staff…and so on.
I agree with what Chandler Howell (NewSchoolSecurity) said; actually, two things he said. First, the paper seems incomplete, or at least it basically tries to monetize the bitching of users but doesn’t seem to have any idea what to do about it (like so, so, so many others).
Of course, that means tipping the scale between user education and technological (in this case, what I read as transparent) controls closer to the technological controls side. Larry Pesce also opined (Fudsec) about this in regard to the futility of user education. Perhaps user education does still have a point. The paper makes an attempt to demonstrate that user “stupidity” is a rational behavior. But would user education actually demonstrate why that rational behavior is in fact wrong? (“Rational” is being used in the “justified” sense.) Is it rational for users to open email messages, or should that actually *not* be the rational action when the user knows and accepts that someone from Nigeria probably wouldn’t be emailing them?
Nonetheless, read the comments on the Boston Globe article for the “user” viewpoint. Read the comments on the other articles I posted for security professional opinions. Yes, something is wrong, but I think much of it still has to do with: people making mistakes; economics (which has various influences here!); cost (again, various angles); and how IT does business fundamentally. (Mycurial had a great comment on the Fudsec article.) Really, unless security has true demonstrable value to your organization, it *has* to be lagging behind attackers, technology, implementations, and IT in general. (I know, that’s an arguable point!)
Anyway, this is me sharing my growling. 🙂 …and adding another rant! I can rant about people ranting who don’t have any solutions, but I’m answering back with more ranting with no solutions as well. I guess the most I can hope for is some cathartic release!
finding religion through a life-threatening moment
I’ve said it for years, and it continues to be one of my driving “laws” of security: People/organizations care far more after they’ve been violated. Newest case in point, Google*:
“Google is now particularly paranoid about [security],” Schmidt said during a question-and-answer session… After the company learned that some of its intellectual property was stolen during an attack…it began locking down its systems to a greater degree…
This is another reason I believe in penetration testing. Sure, it doesn’t quite yank one’s pants down, drive a kick to the balls, or incite that same sense of dread as a real event would, but it should strive to come as close to that as possible. It’s not just about popping boxes with an exploit, but rather demonstrating that, “I just stole your super secret plans. I just deleted your directory servers. And backups. This will cost you xyz. And I sold the backdoor to the Ukrainians, but not before I joined all your servers to a Chinese botnet and sold all your client data to your closest competitor.”
Shows like To Catch a Thief and Tiger Team (and that one social engineering/con/pickpocketing show…) did a great job in demonstrating issues and conveying a taste of the, “Oh fuck…” moments.
I understand we tend to learn through experience. From not touching an oven until we’ve been burned, to not speeding until we’re pulled over, to not wrapping up until we have the herps. But we all have the capability to be informed and not make the mistakes in the first place, or to seek help in areas we don’t understand (yes, that costs money…).
I may, however, just be an ass about people who can’t (or don’t) think ahead…
* Google is a tough case to use, honestly. They had everything to gain by outing China, outing IE6, and raising their own, “we’re-just-being-a-good-steward,” stock. Still, they’re not unique.
again, why should an organization disclose security breaches?
DarkReading throws out, Organizations Rarely Report Breaches to Law Enforcement. This is a, “Duh,” moment, but I do like reading the reasons given in the article.
Taking this further, I think data breach disclosure is still a lot like the age-old iceberg analogy. Even with actual laws requiring it, I would bet all the data breaches we hear about are just the visible top of the iceberg. And there is a whole host of other breaches (both known and undiscovered) lurking in a huge steaming pile below our field of view.
I firmly believe that many businesses (if not all of them!) have a first reaction to ask, “Is this public yet? How likely is this to be public?” And then to kneejerk on the side of saying nothing and keeping things hush-hush. Of course, until someone finds out, most likely through third-party fraud detection analysis or the finding of files obviously stolen from that organization. I would actually expect (whether I like it or not) that all companies will stay mum when not given extremely huge incentives to disclose (jail time, extreme fines, jeopardizing of business).
Hell, I would even expect this occurs not just in disclosure to the public or to law enforcement, but in internal disclosure as well! A tech finds evidence of attackers and tells a manager. And somewhere along the chain up, the message gets squelched for fear of one’s job or a naive misunderstanding of the importance of some incidents.
I wonder how many cases Verizon worked on for their DBIR (or that other security firms have worked on) that should be disclosed but that the host company has opted to stay quiet on… Again, I’d bet it’s a decent number. (Note that I’m not trying to criticize Verizon or security firms, who are likely under NDA and certainly have given their strong advice, but rather the organizations making the ultimate decisions about security and disclosure. Props to any sec firm that still makes an effort to distribute as much info as they can [formal or informal] to help the rest of us!)
if security wasn’t hard, everyone would do it
I’ve been feeling firsthand the pain of implementing PCI in an SMB for the past 6-odd months. It’s not all that fun in some regards (implementing on-going security in an environment that doesn’t have the time for those tasks). So I try to read opinions on PCI any time I see some.
In futilely catching up on my RSS feed backlog, I dug up several nice articles from the PCIGuru: pci for dummies, what is penetration testing, and the purpose of penetration testing.
To paraphrase Tom Hanks’s character in ‘A League of Their Own’: “There’s a reason security is hard. If it wasn’t hard, everyone would do it.”
Truth. I think it gets even harder the more you avoid having qualified staff add to your security value. You want to automate everything for the checkboxes? You’ll end up spending more and getting less in return, even if you do fill in the checkboxes.
This could lead into the other two articles about pen testing. I am a proponent of pen testing as a necessary piece of a security plan, for various reasons. But I also think one reason vuln assessments and pen testing get blurred together is the limited engagements that many third-party pen testers get thrown into, in terms of time and scope. Give a tester 2-5 days for a network-only test and you really are forcing them to rely heavily on automated tools more akin to vulnerability assessments. Granted, you still get a lot out of that, but you get even more from having qualified internal staff who are always thinking from an attacker’s perspective and who can take on longer and more frequent pen-testing types of duties.
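As a rough illustration of what I mean by leaning on automated tools, here’s the sort of sweep a time-boxed tester ends up running: connect, grab a banner, and flag anything that looks dated for manual follow-up. This is a made-up sketch, not anyone’s actual methodology; the IPs are documentation addresses and the “dated” banner strings are just examples.

```python
import socket

# Example/documentation IPs and a couple of made-up "worth a closer look" banner strings.
TARGETS = [("192.0.2.10", 21), ("192.0.2.10", 22), ("192.0.2.25", 25)]
DATED_BANNERS = ["vsFTPd 2.3.4", "OpenSSH_4."]

def grab_banner(host, port, timeout=3.0):
    """Connect and read whatever the service volunteers first, if anything."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(256).decode(errors="replace").strip()
    except OSError:
        return ""

for host, port in TARGETS:
    banner = grab_banner(host, port)
    if not banner:
        continue
    note = "  [flag for manual follow-up]" if any(b in banner for b in DATED_BANNERS) else ""
    print(f"{host}:{port} -> {banner}{note}")
```

An internal person with more time takes those flags and actually chases them down; a 2-day engagement often doesn’t get much past the list.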
In short, it just comes back down to my continued, deeply-held belief that security begins and ends with talented staff. Just like your software products, financial audits, and sales efforts begin and end with staff appropriate to their duties.
also protecting personal data over work lines
Just a few days ago I read about and mentioned a recent New Jersey ruling about client-attorney communications and storage in temporary files on a computer.
I failed to delve into the idea that, quite possibly, other controls in an organization may be affected, namely traffic captures and web filtering tools, especially if SSL termination is performed by the latter.
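For anyone who hasn’t stared at one of these, here’s a minimal sketch of why SSL termination on the filter matters. I’m assuming a mitmproxy-style intercepting proxy purely for illustration (loaded with something like mitmdump -s addon.py), and the webmail domains are placeholders. Once the proxy terminates TLS, a “private” personal email session is just plaintext to whatever logging or capture the filter does.

```python
from mitmproxy import http

# Placeholder personal-webmail domains; a real filter would use its own category lists.
PERSONAL_MAIL_DOMAINS = ("mail.example-webmail.com", "webmail.example.org")

class MailVisibility:
    def request(self, flow: http.HTTPFlow) -> None:
        # By the time this hook runs, TLS has been terminated at the proxy, so the
        # request line, headers, and body of the webmail session are ordinary data
        # to whatever logging or capture policy is in place.
        if flow.request.pretty_host.endswith(PERSONAL_MAIL_DOMAINS):
            print(f"personal mail traffic visible: {flow.request.pretty_host}{flow.request.path}")

addons = [MailVisibility()]
```

Swap that print statement for full logging or a packet capture and you can see why attorney-client traffic riding through the corporate proxy gets legally interesting.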
new jersey ruling on email privacy at work
This is the kind of story and court-ruling that makes my head spin. Via DarkReading:
In a ruling that could affect enterprises’ privacy and security practices, the New Jersey Supreme Court last week ruled that an employer can not read email messages sent via a third-party email service provider — even if the emails are accessed during work hours from a company PC.
According to news reports, the ruling upheld the sanctity of attorney-client privilege in electronic communications between a lawyer and a nursing manager at the Loving Care Agency.
After the manager quit and filed a discrimination and harassment lawsuit against the Bergen County home health care company in 2008, Loving Care retrieved the messages from the computer’s hard drive [temporary cache files] and used them in preparing its defense.
I’d suggest checking out the ruling itself [pdf].
Some of this sounds fairly obvious, right? But what really raises questions would be laptop users who take their system home or offsite (i.e. away from the shelter of corporate web filtering) and then use it to connect to personal email accounts. Do employees have a reasonable right to privacy for any artifacts that get stored on the system, especially those of a protected nature like attorney-client or perhaps doctor exchanges? If so, do employers have a duty to take extra care of those systems, any backups made, or images made after a termination? Or during technical troubleshooting and such?
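To put the “artifacts on the system” problem in concrete terms, here’s a toy sketch of how trivially someone imaging or troubleshooting a machine could stumble across privileged webmail fragments left behind in browser cache or temp files. The cache path and keyword strings are placeholders, and this is nowhere near a real forensic process.

```python
from pathlib import Path

CACHE_DIR = Path.home() / "example_browser_cache"               # placeholder location
KEYWORDS = [b"attorney-client", b"privileged", b"Re: my case"]  # placeholder markers

def scan_cache(cache_dir: Path):
    """Return cached/temp files that contain any of the marker strings."""
    if not cache_dir.is_dir():
        return []
    hits = []
    for path in cache_dir.rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue
        if any(marker in data for marker in KEYWORDS):
            hits.append(path)
    return hits

if __name__ == "__main__":
    for path in scan_cache(CACHE_DIR):
        print(f"possible privileged artifact: {path}")
```

That is essentially what happened in the Loving Care case: the messages weren’t pulled off a mail server, they were sitting in temporary cache files on the hard drive.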
Things like this end up resulting in complex policies, especially those designed to protect both business and individual interests. The same kind of policies that get ignored once they get too complicated…