one man’s creative solution is another man’s forehead on the desk

Questions and tickets posted to me today remind me how much of a stressor it is to support developers. Typically speaking, developers have very few boundaries in which to solve their problems. That lack of boundaries turns into my headache when they start finding creative (read: horrifying) solutions to their problems. Kinda like kids who want to do something but can’t, so they find some unexpected, completely terrible way to do it that leaves a hole in the wall.

And sometimes, it’s not their solutions that suck, it’s the bad initial requirements that suck and really aren’t possible in a given architecture without a lot of unnecessary pain, cost, and compromise of security posture. And of course it’s my team that gets to be the mean parent…

when users give their credentials away

An article on CNET about a LendingTree data leak made me pause for a moment.

Several former employees of LendingTree are believed to have taken company passwords and given them to a handful of lenders who then accessed LendingTree customer data files, the company said.

LendingTree could also face lawsuits from its customers, as well as sanctions from the U.S. Federal Trade Commission, particularly given the potential for identity theft…

I hope that those employees were already “former” when these incidents occurred. That makes life a lot easier. But what if they were still valid employees who gave away their valid passwords to a presumably remotely accessible system (web portal, most likely)? That just sucks. We go from corporate negligence to malicious insider, and that’s a world of difference.

This should bring up questions of how to make authentication non-transferable. Or about the need for and scope of remote access. Or an acknowledgment that we simply can’t be perfect, and sometimes, especially with malicious insiders, our only recourse is rigid auditing and alerting.
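
To make that last point concrete, here’s a rough sketch of the kind of alerting I have in mind: flag any account that shows up from an unusual number of distinct source addresses. The log format and threshold are made up for illustration, not pulled from any real product.

```python
# Hypothetical login-audit sketch: shared credentials tend to show up as one
# account appearing from several different networks.
from collections import defaultdict

def flag_shared_accounts(login_events, max_ips=3):
    """login_events: iterable of (username, source_ip) tuples."""
    ips_per_user = defaultdict(set)
    for user, ip in login_events:
        ips_per_user[user].add(ip)
    # Accounts seen from many distinct IPs may indicate handed-out passwords.
    return {user: ips for user, ips in ips_per_user.items() if len(ips) > max_ips}

events = [
    ("jdoe", "10.0.0.5"), ("jdoe", "72.14.1.9"),
    ("jdoe", "98.2.33.1"), ("jdoe", "64.5.2.77"),
    ("asmith", "10.0.0.9"),
]
for user, ips in flag_shared_accounts(events).items():
    print(f"ALERT: {user} seen from {len(ips)} distinct IPs: {sorted(ips)}")
```

It won’t catch everything, but a lender logging in with an employee’s password from outside our network is exactly the pattern this kind of check would surface.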

a saved comment on security ethics in a questionable situation

Slashdot ran an email from a senior security engineer lamenting his company’s ethics in security auditing. Dan Morrill posted about it, which was my first exposure to it. I posted a comment on his blog, and he sort of lightly guilted me into posting it on my own blog here. Honestly, it had some points that I didn’t want to just lose to the ether, so I’m saving them here for myself.

So read Slashdot first, then Dan, and my post will make more sense. I’ll concede the point that audits really are a bit about negotiating your security value, but I think that negotiation needs to be documented. Risk A, mitigating factors B, accepting C…

I know it’s a cop out, but I would look for work elsewhere. It’s not only a cop out, but also a bit of a cynical approach. But once you start down this road of fudging the risks and numbers, where do you stop? Where do you re-find that enthusiasm for an industry you’re helping to game? What if your name gets attached to the next big incident? What if the exec that got you to bend throws your name out to others looking for the same leeway? Integrity is maybe our most important attribute in security.

I know strong-arming (or outright lying!) happens; it always has. I think the only way it won’t is to have a very mature, regulated industry, much like insurance or the SEC/accounting/financial space.

Of course, this also means we need to remove or greatly reduce subjective measures and focus on objective ones. Those are the ones we hate: checkboxes and linear values. Those suck to figure out, especially when every single org’s IT systems are different. I just don’t think that will happen for decades, if ever. Unlike the car industry or even the accounting disciplines, “IT” is just too big and broad and has too many solutions to control.

This leads to one of my biggest fears with PCI. Eventually it will be something negotiated, and the ASVs will be the ones taking the gambles. Lowest price on a rubber-stamp PCI compliance. Roll the dice that while we roll in the money, our clients don’t get pwned in the goat…good old economics and greed at work.

This also penalizes the many people who are honest and up front and who deal with the risk ratings in a positive fashion. Sure, they may get bad scores, but that means there is room for measurable improvement. There are honest officers and people in this space. But there are also those who readily lie and deceive and roll the dice on security, and those are the ones who will drive deeper regulation and scrutiny.

I’m confused by the post itself. I’m not sure if his company is being strong-armed or if his company is doing the strong-arming.

If his company is being strong-armed, then any risk negotiation should be documented. “We rated item #45 as highly important. Client (name here) documented that other circumstances (listed here) mitigate this rating down to a Medium.”

If his company is doing the strong-arming, you might want to just let senior mgmt do their thing. Ideally, if shit hits the fan, it is the seniors who should be taking the accountability, not others, especially if they’ve been involved in the decision-making process.

With this line of thinking, there is another thing: the geek factor. As a geek, I tend to know about and inflate the value of very geeky issues. It is often up to senior mgmt or the business side to make decisions on the risks. Sometimes, the decision is made to accept the risk. This means possibly not fixing a hole because the cost is too great, even if there is movie-plot potential for a big issue. It might be worth sitting back, taking some time, and reflecting on the big picture a little more. Are these strong-arm tactics covering up truly important things? Or are they simply offending our geek ethic?

One could also ask: what is the proper measure of security? It is always a scale between usability and security, and in the words of the poster, there will always be some scale that involves accepting some risk in order to keep one’s job. The alternative is to be so strict about security that you could only get away with it in a three-letter agency or a contractor thereof!

Ok, after all of that, if the guy wants to keep his job (or not, I guess) yet still blow the whistle on such bad practices, I’ll have to put on my less-white hat and give some tips.

It sucks to do, but sometimes you do have to skip the chain of command and disclose information to someone up above the problem source. I’d only do this after carefully considering the situation and making sure I have an out. Even an anonymous report to a board of directors is better than silently drowning with the rest of the ship.

If there is a bug or vulnerability in an app or web app, get it reported through your reporting mechanisms internally, like a bug system or ticket system. Get it documented. The worst they can do is delete it, at which point you might want to weigh disclosing it publicly somehow… (of course, by that time, they’ll likely know it was you no matter how anonymous you make it).

If the company is big enough and the issues simple enough, you might get away with publishing anonymously in something like 2600, The Consumerist, or a general third-party blog. Sadly, when trying to get people to understand technical risk, it can be difficult to be precise, understandable, and concise. If the guy belongs to some industry organizations (InfraGard, ISACA, etc.), perhaps leaning on some trusted (or NDA-backed) peers can be helpful.

en-twitter-ized aka i joined twitter

I just signed up for Twitter. I also embedded a tracker for just my posts over on my right menu bar up near the top.

I’ve been online a relatively long time now, nearing 15 years, which has included a lot of social stuff (IM, IRC, forums…). Because of this, I’m not terribly quick to adopt various newfangled social networks. It’s a lot of work to maintain a presence, and most of my old stuff still works just fine for me. But Twitter looks interesting and mildly useful: basically a web-based IM system when used with others, and a more streamlined, eye-blink, stream-of-consciousness blog/journal type of thing when used alone.

I don’t really have ambitions for Twitter beyond logging my own goings-on that aren’t quite blog-worthy, so feel free to invite/abuse/include me in whatever. Never know, I may instead decide half my posts to Twitter are useless even to me, and the rest I could just roll into blog posts… I certainly have that freedom since I have no ambitions with my blog itself (hence no ads or viewer tallies!).

consolefish offers ssh-via-web connections

I’ve previously mentioned a web-based SSH tool as a way to access your SSH server through a web browser (and port 80/443).

Another such tool is up: consoleFISH. I tried it out really quick (I didn’t complete the login process), and it seems to work nicely. Of course, when using such a tool, assume that everything you type is being read by the web server, including the password. Would I use this? Maybe in an emergency or when accessing an SSH server I care nothing about (someone else’s!), but not likely for any of mine unless it was through my own web server. I may as well just port forward locally with PuTTY or use AjaxTerm on my own server here…
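
To make that trust problem concrete: any web-SSH gateway has to open the SSH session itself, which means your credentials necessarily pass through it in the clear. Here’s a minimal sketch of what the server side of such a gateway does, using paramiko; the host, user, and command are hypothetical.

```python
# Sketch of a web-SSH gateway's server side. The gateway, not your browser,
# performs the SSH handshake, so it holds your plaintext password here.
import paramiko

def gateway_session(host, user, password, command):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)  # gateway sees this
    _, stdout, _ = client.exec_command(command)  # ...and every keystroke
    output = stdout.read().decode()
    client.close()
    return output
```

Whether the operator logs any of it is purely a matter of their goodwill.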

Snagged this from a0002.

automated sql injector tool recovered

The SANS Diary has posted a recovered tool that has been used in mass defacements of websites. I’m sure this is being posted all over, so I won’t wax on it too much. The tool uses a search to find potentially vulnerable sites, then just mass-attempts to SQL inject them. It’s a sweet, simple little tool, and I’m sure there are many, many others out in the wild just like it that simply haven’t been recovered or distributed by the author.

Bojan closes the piece with the necessary suggestion for everyone: fix your shit. Run your own scans against your web apps because attackers are already doing it. Kinda reminds me of port scanning your firewall…attackers do it, so should you! You’ve already lost the battle if attackers have more information than you do, or find that open port (vulnerable input) before you do.
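
In that spirit, here’s a bare-bones sketch of what “run your own scans” can look like: throw a single quote at your own app’s parameters and watch the response for database error signatures. Real scanners do far more than this, and the URL and error strings below are illustrative assumptions.

```python
# Crude SQL injection probe for apps you own or are authorized to test.
import urllib.error
import urllib.parse
import urllib.request

ERROR_SIGNATURES = ["sql syntax", "unclosed quotation mark", "odbc", "ora-01756"]

def probe(url, param):
    # Inject a lone single quote and look for tell-tale database errors.
    test_url = url + "?" + urllib.parse.urlencode({param: "'"})
    try:
        with urllib.request.urlopen(test_url) as resp:
            body = resp.read()
    except urllib.error.HTTPError as err:
        body = err.read()  # a 500 page often contains the DB error text
    text = body.decode(errors="replace").lower()
    return [sig for sig in ERROR_SIGNATURES if sig in text]

hits = probe("http://intranet.example.com/page.asp", "id")
if hits:
    print("Possible SQL injection; error signatures:", hits)
```

The attackers’ tool automates exactly this loop, just pointed at everyone else’s sites via search results.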

podcast background tunes

I was organizing some old files and came across one of my favorite 22c3 recordings. Tim Pritlove gave a “talk” called The Realtime Podcast, and I’m amazed I never posted about it. Tim’s talk was a realtime podcast on the topic of podcasting. If you can get your hands on the mp4 recording, you’ll much prefer it to the low-res, reduced audio of the linked Google Video version.

One thing I’ve noticed on most (all?) podcasts I’ve listened to is that they have no background music playing. I find it interesting and somewhat more “focusable” to have the background music that Tim uses. I’d be curious whether that would work well for any security podcasters, especially when the levels are controlled.

Tim pimps out DJ L’Embrouille [translated] in his podcast, a DJ who freely releases his electronic mixes. His sound ranges from ambient, minimal electronic to more house types of beats; basically stuff I totally dig. The mix Tim seems to be using is 2005 Week 38 (MPIIIRadiomix220905), although the levels are futzed a bit to reduce the heaviness and drop out much of the bass, I’m sure for podcasting purposes.

Drifting off on a tangent, many mixers put their little tags or snippets in the first few minutes of their mixes, and DJ L’Embrouille often does as well. He uses an almost whispered monologue. I have no idea if he came up with it, spoke it, or where it comes from, but it’s an amazing little piece*:

turn on,
tune in and drop out;
you can’t say that;
what I am saying,
happens to be,
the oldest method,
of human wisdom;
look within,
find your own divinity,
detach yourself,
from social and material struggle;
turn on,
tune in and drop out

* In doing just a bit more research, I think this piece is a reference to, if not an audio sample of, Timothy Leary, who coined the phrase “turn on, tune in, drop out.”

malware analysis and incident response sans papers

A couple of interesting papers have been posted to the SANS Reading Room.

First, “Malware Analysis: An Introduction.” I don’t particularly care so much for the introduction part, but I do like the walk-through later in the paper. I like to save paper, so I only printed out what I found interesting: pages 40-63. I should save even more paper and invest in a Kindle or e-book reader… One thing I noticed the author didn’t use, but which I would recommend, is a snapshot tool run before and after execution of the malware to capture changes in processes, files, and registry entries (InCtrl5 is still a great choice). I know he watches Process Explorer and TCPView, but it can be difficult to read everything in realtime if the malware does a lot. I was surprised there was no mention of Filemon or Regmon either.
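
The snapshot idea is simple enough to sketch. Here’s a crude version of the filesystem half of what a tool like InCtrl5 does; the path is hypothetical, and this belongs in a throwaway VM, not your workstation.

```python
# Before/after filesystem snapshot diff for malware analysis.
import os

def snapshot(root):
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished or is locked; skip it
    return state

def diff(before, after):
    added = set(after) - set(before)
    removed = set(before) - set(after)
    changed = {p for p in set(before) & set(after) if before[p] != after[p]}
    return added, removed, changed

before = snapshot(r"C:\Windows\System32")
# ... detonate the sample here, inside an isolated VM ...
after = snapshot(r"C:\Windows\System32")
added, removed, changed = diff(before, after)
print(f"{len(added)} added, {len(removed)} removed, {len(changed)} modified")
```

A real tool also diffs the registry and process list, but the principle is the same: you don’t have to catch everything in realtime if you can compare states afterward.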

Second, “Espionage – Utilizing Web 2.0, SSH Tunneling and a Trusted Insider.” I didn’t think this would be something I’d print out and read, but in quickly scrolling through it, it seems to pack a lot of very technical stuff into a web-borne client-side exploit. I appreciate that! Later in the paper, Ahmed discusses the incident response actions of the victim.

I swear I picked these up from McGrew’s blog, but can’t find them now. I could be wrong and got them elsewhere…

extending open wireless networks using the predator

This looks like a fun little project that might run near $100 assuming one needs to get all the parts. The Predator from I-Hacked essentially extends the range of an open wireless network, rebroadcasting it in a secure mode that you can hop onto. It does this with an external antenna and DD-WRT.

Does this have any uses? Well, I doubt anyone wants to cart this around on a trip, and it certainly looks suspicious in a parking lot. But it might make a decent addition to a wardriving car/truck/van setup. A few years ago this might have been a fun idea to get wireless access while around town, but these days cell phone-to-laptop Internet services and gear seem to be solving this problem. This could obviously be used to surreptitiously connect from a distance to closed wireless networks that you have cracked. Although it might be more useful to just plop the antenna on the laptop and crack/access that way as well.

starting the offensive security coursework

My mention yesterday of the Offensive Security movie pack didn’t properly do it justice. I said there was a nearly 700 MB .rar file of movies. This unpacked to over 100 shockwave/flash movies for a total of 3.4 GB (correction: just over 700 MB; see the update below). There is also a 400+ page lab .pdf file to be used in conjunction with the movies and the VPN connection to the lab network. This could be a little more work/time than I intended!

The pdf and movies also have watermarks quite prominently displayed stating my name, email, ID number, and address. That’s a nice deterrent against distributing the materials, but I might look into stripping it out of the movie files just because it’s a bit of a distraction. When focusing on the terminal windows in the movies, it just seems like poorer quality than it is because the watermarks kinda blur into the background, like a dirty lens or poor resolution. I don’t want to give these out to anyone, just clean up the experience. I’ll have to read the docs to see whether even doing that is against any rules I’ve signed.

Update: I obviously can’t read folder sizes properly. The movies are just over 700 MB, not 3.4 GB.

security 2.0 means technological controls are not enough

(Disclaimer: Take this post as a week-starting rant, and nothing more. Skip the stricken parts, read the first paragraph, then the bolded part and you’ll get the gist. I’m just a terrible editor and hate removing things I’ve written!)

I’m a bit late to the party, but I finally read a feature article over on BusinessWeek dealing with the Pentagon (and US gov’t in general), e-espionage, and email phishing. The attempt to inject fake emails into the lives of defense contractors and workers reminds me of Mitnick’s phone escapades with telecom companies: Sound like you belong there, speak the lingo, establish trust through deception.

This heralds a big change in cyber security at any level. It is no longer just about educating users about phishing. While that is a good practice, it simply cannot guarantee a level of security. This is a fundamental change in how we do business and interact as humans.

The CISSP and many security fundamentals include the subjects of least privilege and separation of duties. It is important to realize that people will be duped. And if they get duped, what controls are in place to make sure they don’t do too much damage? If they authorize a fake order for military weapons, are there any checks or validations that can catch fraudulent activities that are within the bounds of that worker’s duties? Are they properly restricted in the access they have to various information? What change control is in place to prevent malicious (or accidental) activity? Will we even know an incident happened?
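
As a toy illustration of the separation-of-duties point, consider dual control on purchase orders: even if one worker is duped, nothing big moves on a single signature. The names and threshold below are made up for the example.

```python
# Dual-control sketch: large orders require two distinct approvers,
# and the requester can never approve their own order.
class Order:
    def __init__(self, item, amount, requester):
        self.item, self.amount, self.requester = item, amount, requester
        self.approvers = []

    def approve(self, approver):
        if approver == self.requester or approver in self.approvers:
            raise PermissionError(f"{approver} cannot approve this order")
        self.approvers.append(approver)

    def can_execute(self, dual_control_threshold=10_000):
        required = 2 if self.amount >= dual_control_threshold else 1
        return len(self.approvers) >= required

order = Order("night-vision units", 250_000, requester="duped_worker")
order.approve("manager_a")
print(order.can_execute())  # False: still needs a second, independent approval
order.approve("manager_b")
print(order.can_execute())  # True
```

A phisher now has to dupe two people in the right roles instead of one, and the approval trail gives the auditors something to alert on.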

Other major news lately smacks of these same challenges, since we’re all behind the curve in digging down into what will really improve security, not just bandage and work around things. Hannaford had malware on 300 (all?!) internal credit card-processing servers; I still maintain this stinks of an inside job. How the crap did that happen? An insider recently made fraudulent trades, earning himself quite a load of money just because he had access and controls were lacking.

This is a shift from stopping technological threats with technological controls: malware stopped by AV, scan tools stopped by firewalls. This is bleeding into two far more difficult areas: business process and human mistake. It is easy for someone at Geek Squad to belt out AV, HIDS, NIDS, firewalls, spam gateways, and strong passwords as methods to add security. But I think we’re at a point where we need to move beyond those levels and get into the real deep stuff, the things that make our brains hurt to think about (or to get the appropriate stakeholders into a meeting about!).

Change control, data access policies, audit, access restrictions, strong authentication, authorization by committee and not just the IT team… This is the real reason, in my mind, that so many people are clamoring about IT/security aligning with business: our next projects can only be done with the business cooperating. Ever try change management in the silo of IT? Or auditing, or any of that stuff? And in the absence of those projects, ever try to guarantee security using only technical means that IT is the sole proprietor of? I strongly believe in technological controls and the remarkably high value they have, but I’m also highly sympathetic to the view that those controls alone are not enough; they are just the starting baseline of a strong security foundation.

Then again, I could be barking up a deaf tree. Business is not economically willing to stop all cyber insecurity; otherwise sec geeks wouldn’t be unanimous in our yearning for more staff, more budget, and more business cooperation. It is still far less economically challenging for a business to just meet PCI and implement firewalls, HIDS, HIPS, spam filters, and other technological controls.

I could also be way off the green in a sand trap by focusing on the sensational, one-off media news reports mentioned above. Maybe those are unfortunate incidents that got trumpeted on front pages, but are not everyday or every-year happenings. If there’s one thing the media will have in abundance forever, it’s stories about failure. That’s life!

misleading article about letting users manage their own pc

I’ve finally actually read the article I previously mentioned, IT heresy revisited: Let users manage their own PCs. While I like the topic and it brings good discussion, the author goes off on too many bad points. In fact, I think the author needs to simply spend some time in an IT department (more than likely the author is a stay-at-home cyber journalist who is king of his 2-computer home network and all-in-one fax-printer…).

I want to start out with a disclaimer that I am sympathetic to both sides of this debate: the side of centralized control (for both operations and security) and the side of user freedom. I can argue either side all day or night.

The author repeatedly uses Google and BP as examples of this empowerment of users, but this is misleading.

Search giant Google practices what it calls “choice, not control,” a policy under which users select their own hardware and applications based on options presented via an internal Google tool. The U.K. oil giant BP is testing out a similar notion and giving users technology budgets with which they pick and buy their own PCs and handhelds.

This is a hell of a lot different from opening up employees to truly choosing their own hardware and software. This is still a list approved, and likely supported, by Google’s internal staff.

In this Web 2.0 self-service approach, IT knights employees with the responsibility for their own PC’s life cycle. That’s right: Workers select, configure, manage, and ultimately support their own systems, choosing the hardware and software they need to best perform their jobs.

Really, they support it? So when they mess it up, they have administrative rights to uninstall and reinstall? Do they have the ability to call the manufacturer, talk through a flaky motherboard, and get a new one sent out? I’d have to call that dubious. Sure, they can choose their software from a list of options, but that’s still not the freedom many workers are truly looking for in managing their own workstation. If they can’t put on the Yahoo toolbar, the Google toolbar, 3 different IM systems, and 4 screensavers of their choice (yes, people still do that!), then it’s not the freedom they often want. The author is misrepresenting this group, or poorly defining it (more on that later!).

All too often, IT groups write and code policies that restrict users, largely based on a misbegotten belief that workers cannot be trusted to handle corporate data securely, said Richard Resnick, vice president of management reporting at a large, regional bank that he asked not be identified. “It simply doesn’t have to be this way,” Resnick said. “Corporations could save both time and money by making their [professional] employees responsible for end-user data processing devices.”

I can’t outright agree with these sentiments. There are plenty of instances where employees shouldn’t be trusted with such data. In my company, we have an email filter that looks for sensitive data such as SSN fields in an Excel spreadsheet being sent. It captures this and turns the email into an “encrypted” email by forcing the recipient to log into an account on our mail server and pick it up. Users don’t like this (duh, it’s a terrible solution) and we’ve had one user mask the SSN field just so she could email the document to a client. This user didn’t even have any admin rights on her system, but still had the ability to put data at risk to satisfy a task.
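
For flavor, here’s a stripped-down sketch of what that kind of outbound filter does; the regex and the quarantine behavior are illustrative assumptions, not our actual product’s logic.

```python
# Toy outbound DLP check: flag messages containing SSN-shaped data.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def check_outbound(message_body):
    hits = SSN_PATTERN.findall(message_body)
    if hits:
        # The real system diverts the message to a pickup portal;
        # here we just flag it.
        return f"QUARANTINE: {len(hits)} possible SSN(s) found"
    return "DELIVER"

print(check_outbound("Per your request: 123-45-6789"))  # QUARANTINE
print(check_outbound("Meeting at 3pm"))                 # DELIVER
```

It also shows exactly why masking the field worked: a pattern-based filter only catches data shaped like what it expects.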

People don’t think about data security, even if that is spelled out as their responsibility in a policy. Users care about getting their jobs done. While this isn’t universal and plenty do act responsibly, we are forced to react to those that don’t.

To IT, the glaringly obvious advantages of user-managed PCs are reduced support costs and far fewer pesky help desk calls.

I don’t buy this either. Users may have more questions since they all have their own setups and IT staff will need to know a wider array of those options. That or they will turn users away when confronted with unsupported software/hardware, causing frustration.

One thing IT needs to worry about is simply displacing the frustrations that users have. Such empowerment may move frustration from users not having enough freedom to users having so much freedom that IT can’t properly support them. Should users be frustrated with not being able to install their favorite software, or be frustrated when their PC runs dog-slow with all the crap on it? Or will they be frustrated with the array of choices in software and hardware and just want a template for their job? I know many coworkers who would actually be unable to properly choose their own hardware and software to get their jobs done, and who feel far more comfortable having it prescribed to them. Sure, the freedom may be fun, but the grass on that side of the fence still tastes like grass after a few chomps.

Google CIO Douglas Merrill concurred. “Companies should allow workers to choose their own hardware,” Merrill said. “Choice-not-control makes employees feel they’re part of the solution, part of what needs to happen.”

Again, I disagree in part. For many workers their job duties do not include maintaining a proper PC system. They want and need IT to take care of that often frustrating piece of their day. We fight this every day in the security field with people claiming security isn’t their job. (And I’ll argue that they’re both right and wrong.) Besides, do you want your employee making sales calls all day, or spending half the day maintaining their system?

“Bottom line: The technology exists,” Resnick said, “[But] IT has no interest in it because their management approach is skewed heavily toward mitigation of perceived risks rather than toward helping their organizations move forward.”

I’ve disagreed a lot with this article, but I do realize the problem posed above. I don’t think these risks are necessarily perceived risks, but we do have to keep an open mind toward improving employee morale and productivity with computing. If we can peel back control without incurring excessive costs and risks, why not? Are we holding the company back, or are we encouraging innovation and creative solutions?

Sadly, the article continues to pound home that workers should be able to choose their own hardware and systems. That is a hell of a lot different from someone downloading, installing, and managing their own software entirely independent of IT.

“I would expect most companies to implement basic security protocols for employee PCs, including virus scanning, spam filters, and phishing filters,” Maine’s Angell said. “They might provide software tools or simply implement a system check to make sure that such items are running whenever the employee’s laptop is connected to the company environment.”

Unfortunately, some host-based security mechanisms become largely useless once users have administrative rights to their systems. IT cannot rely on the host-based firewall to be configured to limit access to network resources (users can just turn it off) or to stop the egress of malicious connections (users can just click allow). A piece of malware run by a user may disrupt such controls immediately. Basically speaking, IT can remotely monitor systems that users control, but it can guarantee no level of security. IT no longer owns that piece of hardware; someone else does.
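
As for the quoted suggestion of a system check on connect, here’s a minimal sketch of the idea; the service names are Windows examples, and the caveat above still applies, since an admin user (or their malware) can simply stop these services again after the check passes.

```python
# Network-admission sketch: verify security services are running on connect.
import subprocess

REQUIRED_SERVICES = ["MpsSvc", "WinDefend"]  # Windows Firewall, Defender

def service_running(name):
    result = subprocess.run(["sc", "query", name],
                            capture_output=True, text=True)
    return "RUNNING" in result.stdout

def compliance_check():
    missing = [svc for svc in REQUIRED_SERVICES if not service_running(svc)]
    return len(missing) == 0, missing

ok, missing = compliance_check()
if not ok:
    print("Deny network access; services not running:", missing)
```
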
Finally! At the end of the article, the author defines the audience he’s really been addressing this whole time: users who have some technical proficiency and a stake in remaining creative with their problem-solving on their PCs. The author should really have put this at the front of the article, but instead chose to hold it back until now. Basically, he stirs the pot with a sensational piece and then narrows it down to something more reasonable at the end, much like trudging 3 blocks in the pouring rain only to arrive at your destination and realize you could have gone one extra block and taken a skywalk the whole way.

letting users manage their own workstations

I’d been slowly compiling a list of points on the topic of corporate users being allowed administrative rights on their systems. Not that I want users to have such power, but what if it’s not your choice? What if it costs more to piss off your users and stifle their creativity than it does to exert draconian control over their systems? The sort of topic that gets into what to do in such an environment to tip the scales back in the IT/Sec team’s favor.

Seems a similar story has run on InfoWorld, been Slashdotted, and been mentioned elsewhere. Nice discussion! Hopefully soon I can tie up my own post, but, being a braindump sort of post, it seems never-ending!