.: April 2008 Archives
In preparation for taking the OSCP training from Offensive Security, I have downloaded and begun to try out BackTrack 3 beta. Some initial thoughts:
- Upon booting from the live CD, my system immediately hopped onto the nearest open wireless network. "Hello neighbor, I didn't know you put this up recently! Thanks for welcoming me right in, don't mind if I do rummage in your cupboards!" This is a deviation from the stealthy approach BT2 took. I hope BT3 will return to it when it moves out of beta.
- The permanent hard disk install is not yet automated, although there is a menu option for it. Hopefully this is fixed before release, since the steps needed are neither many nor varied: choose destination, copy files, fiddle with LILO, done!
- Stupid me, I didn't write down my settings from my local BT2 install before wiping it out and installing BT3, so now simple things like monitor mode and kismet don't work. Annoying, but should be simple to fix.
- Once BT3 is installed, I see the remote-exploit.org forums have really fleshed out since I last browsed around, and there are a lot of video and text tutorials and people throwing out ideas and such. The wiki is also working out nicely.
As mentioned, I installed it onto the hard disk of a laptop; the same system that has run BT2 for quite some time. I don't need a dual boot setup since I'm an actual geek and have spare systems so I don't have to pretend I use Linux (BackTrack) while really booting into Windows 99% of the time! This wasn't difficult, but it does take about an hour to complete.
After booting into the live CD, the first thing I did was run fdisk against /dev/hda to remove my existing partitions, then create new ones. The device names can be found under System->Storage Devices in KDE. I then followed some instructions posted on the forum. There is also a video (Camtasia capture/Shockwave) going through the same steps.
Maybe when BT3 goes out of beta I'll post, for my own future benefit, the actual keystrokes and steps to do an HD install and some initial configurations to get Kismet and injection working, but for now the above links should suffice for my needs.
by michael 04.02.08 at 1:33 AM in /general
Just want to post a link to an article titled "The Top 10 Security Landmines."
by michael 04.08.08 at 11:03 AM in /general
Sometimes you need a little perspective in the business world, mostly to remind yourself that everyone is still human, no matter what their station or salary in life. Even sec geek-related news can offer perspective (e-Discovery).
Seattle is in the midst of losing its NBA team
, the Seattle Supersonics. The new owners bought the team in 2006 and have maintained that they are operating in good faith with the city of Seattle and simply not able to come to a compromise. The owners want to move the team to, of all places, Oklahoma. Recently obtained emails paint a far different story.
Here is an exchange between Clay Bennett and Tom Ward. Clay Bennett is now a co-owner of the Supersonics, parks his arse as chairman at a couple of energy outfits, and was previously a co-owner of the San Antonio Spurs. Tom Ward appears to be a billionaire of something or other to do with energy and also a co-owner of the Supersonics.
"Is there any way to move here [Oklahoma City] for next season or are we doomed to have another lame duck season in Seattle?" Ward wrote.
Bennett replied: "I am a man possessed! Will do everything we can. Thanks for hanging with me boys, the game is getting started!"
McClendon, a minority owner of the Supersonics (and a CEO blah blah blah also involved with energy), sent this email to Bennett and Ward shortly after purchasing the team:
...McClendon celebrated the news with the subject line: "the OKLAHOMA CITY SONIC BOOM (or maybe SONIC BOOMERS!) baby!!!!!!!!!!"
Of course, if you've ever managed a mail server in any fashion you have certainly seen the lameness that passes through email exchanges. Hell, I'm sure my own missives include plenty of lowbrow sludge. But still, it is always refreshing to see such eloquence from important business people who have more money at their fingertips than I will ever have a chance to have, writing in a way that makes me want to crack open a Busch Light and watch South Park after class with my other hand in my sweatpants.
by michael 04.10.08 at 7:25 PM in /general
I'd been slowly compiling a list of points on the topic of corporate users being allowed administrative rights on their systems. Not that I want users to have such power, but what if it's not your choice? What if it costs more to piss off your users and stifle creativity than it does to exert draconian control over their systems? The sort of topic that goes into what to do in such an environment to tip the scales back in the IT/Sec team's favor.
Seems a similar story has run on InfoWorld
, been Slashdotted
, and mentioned elsewhere
. Nice discussion! Hopefully soon I can tie up my own post, but, being a braindump sort of post it seems never-ending!
by michael 04.11.08 at 12:45 PM in /general
I've finally actually read the article I previously mentioned, "IT heresy revisited: Let users manage their own PCs." While I like the topic and it brings good discussion, the author goes off on too many bad points. In fact, I think the author simply needs to spend some time in an IT department (more than likely the author is a stay-at-home cyber journalist who is king of his 2-computer home network and all-in-one fax-printer...).
I want to start out with a disclaimer that I am sympathetic to both sides of this debate: centralized control (both for operations and security) and user freedom. I can argue either side all day or night.
The author repeatedly uses Google and BP as examples of this empowerment of users, but this is misleading.
Search giant Google practices what it calls "choice, not control," a policy under which users select their own hardware and applications based on options presented via an internal Google tool. The U.K. oil giant BP is testing out a similar notion and giving users technology budgets with which they pick and buy their own PCs and handhelds.
This is a hell of a lot different than opening up employees to truly choosing their own hardware and software. This is still a list approved and likely supported by Google's internal staff.
In this Web 2.0 self-service approach, IT knights employees with the responsibility for their own PC's life cycle. That's right: Workers select, configure, manage, and ultimately support their own systems, choosing the hardware and software they need to best perform their jobs.
Really, they support it? So when they mess it up, they have administrative rights to uninstall and reinstall? Do they have the ability to call the manufacturer and talk through a motherboard that is flaky and get a new one sent out? I'd have to call dubious on that. Sure, they can choose their software from a list of options, but that's still not truly the freedom many workers are looking for in managing their own workstation. If they can't put on Yahoo toolbar, Google toolbar, 3 different IM systems, and 4 screensavers of their choice (yes, people still do that!), then it's not the freedom they're often wanting. The author is misrepresenting this group, or poorly defining the group (more on that later!).
All too often, IT groups write and code policies that restrict users, largely based on a misbegotten belief that workers cannot be trusted to handle corporate data securely, said Richard Resnick, vice president of management reporting at a large, regional bank that he asked not be identified. "It simply doesn't have to be this way," Resnick said. "Corporations could save both time and money by making their [professional] employees responsible for end-user data processing devices."
I can't outright agree with these sentiments. There are plenty of instances where employees shouldn't be trusted with such data. In my company, we have an email filter that looks for sensitive data such as SSN fields in an Excel spreadsheet being sent. It captures this and turns the email into an "encrypted" email by forcing the recipient to log into an account on our mail server and pick it up. Users don't like this (duh, it's a terrible solution) and we've had one user mask the SSN field just so she could email the document to a client. This user didn't even have any admin rights on her system, but still had the ability to put data at risk to satisfy a task.
People don't think about data security, even if that is spelled out as their responsibility in a policy. Users care about getting their jobs done. While this isn't universal and plenty do act responsibly, we are forced to react to those that don't.
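At its core, a filter like the one above is just a pattern match over outbound mail, which is exactly why masking the field walks right past it. A minimal sketch (this is my own illustration; the regex and function name are assumptions, not the actual product's logic):

```python
import re

# Naive SSN pattern: ###-##-####. An assumption about how such a
# filter might match; real products are (hopefully) smarter.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text):
    """Return True if the text appears to contain an SSN."""
    return bool(SSN_PATTERN.search(text))

print(contains_ssn("client SSN: 123-45-6789"))  # True: caught, gets "encrypted"
print(contains_ssn("client SSN: XXX-XX-6789"))  # False: masked, sails through
```

The second call is the user's workaround in a nutshell: the data is still mostly there, but the control never fires.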
To IT, the glaringly obvious advantages of user-managed PCs are reduced support costs and far fewer pesky help desk calls.
I don't buy this either. Users may have more questions since they all have their own setups and IT staff will need to know a wider array of those options. That or they will turn users away when confronted with unsupported software/hardware, causing frustration.
One thing IT needs to worry about is simply displacing the frustrations that users have.
Such empowerment may move frustration from users not having enough freedom to users having so much freedom that IT can't properly support them. Should users be frustrated with not being able to install their favorite software, or be frustrated when their PC runs dog slow with all the crap on it? Or will they be frustrated with the array of choices in software and hardware and just want a template for their job? I know many coworkers who would actually be unable to properly choose their own hardware and software to get their jobs done, and who feel far more comfortable having it prescribed to them. Sure, the freedom may be fun, but the grass on that side of the fence still tastes like grass after a few chomps.
Google CIO Douglas Merrill concurred. "Companies should allow workers to choose their own hardware," Merrill said. "Choice-not-control makes employees feel they're part of the solution, part of what needs to happen."
Again, I disagree in part. For many workers their job duties do not include maintaining a proper PC system. They want and need IT to take care of that often frustrating piece of their day. We fight this every day in the security field with people claiming security isn't their job. (And I'll argue that they're both right and wrong.) Besides, do you want your employee making sales calls all day, or spending half the day maintaining their system?
"Bottom line: The technology exists," Resnick said, "[But] IT has no interest in it because their management approach is skewed heavily toward mitigation of perceived risks rather than toward helping their organizations move forward."
I've disagreed a lot with this article, but I do realize the problem posed above. I don't think these risks are necessarily perceived risks, but we do have to keep an open mind toward improving employee morale and productivity with computing. If we can peel back control without incurring excessive costs and risks, why not? Are we holding the company back, or are we encouraging innovation and creative solutions?
Sadly, the article continues to pound home that workers should be able to choose their own hardware and systems. This is a hell of a lot different than someone downloading and installing and managing their own software independent of IT entirely.
"I would expect most companies to implement basic security protocols for employee PCs, including virus scanning, spam filters, and phishing filters," Maine's Angell said. "They might provide software tools or simply implement a system check to make sure that such items are running whenever the employee's laptop is connected to the company environment."
Unfortunately, some host-based security mechanisms become far less useful if users have administrative rights on their systems. IT cannot rely on the host-based firewall to be configured to limit access to network resources (users can just turn it off) or to stop the egress of malicious connections (users can just click allow). A piece of malware run by a user may disrupt such controls immediately. Basically, IT can remotely monitor systems that users control, but it can guarantee no level of security. IT no longer owns that piece of hardware; someone else does.
Finally! At the end of the article the author defines the audience he's really been addressing this whole time: users who have some technical proficiency and stake in remaining creative with their problem-solving using their PCs.
The author should really have put this at the front of the article, but instead chose to hold it back until now. Basically stirring the pot with a sensational piece and then limiting it down to something more reasonable at the end, much like trudging 3 blocks in the pouring rain only to arrive at your destination and realize you could have gone one extra block and taken a skywalk the whole way.
by michael 04.11.08 at 2:50 PM in /general
Diving deeper into Snort has been a pet project on my list for some time now. I see Bejtlich has put up his 14th Snort Report. I thought I would list links to all 14 here, but why do that when they're all listed on SearchSecurity?
by michael 04.14.08 at 1:01 PM in /general
(Disclaimer: Take this post as a week-starting rant, and nothing more. Skip the stricken parts, read the first paragraph, then the bolded part and you'll get the gist. I'm just a terrible editor and hate removing things I've written!)
I'm a bit late to the party, but I finally read a feature article over on BusinessWeek dealing with the Pentagon (and US gov't in general), e-espionage, and email phishing. The attempt to inject fake emails into the lives of defense contractors and workers reminds me of Mitnick's phone escapades with telecom companies: Sound like you belong there, speak the lingo, establish trust through deception.
This heralds a big change in cyber security at every level. It is no longer just about educating users about phishing; while that remains good practice, it simply cannot guarantee any level of security. This is a fundamental change in how we do business and interact as humans.
The CISSP and many security fundamentals include the subjects of least privilege and separation of duties. It is important to realize that people will be duped. And if they get duped, what controls are in place to make sure they don't do too much damage? If they authorize a fake order for military weapons, are there any checks or validations that can catch fraudulent activities that are within the bounds of that worker's duties? Are they properly restricted in the access they have to various information? What change control is in place to prevent malicious (or accidental) activity? Will we even know an incident happened?
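The fake-order question above is really a separation-of-duties check: no single account, even a fully trusted one, should be able to both create and approve a sensitive action. A toy sketch of the control (the class and names are my own invention, not any real procurement system):

```python
class PurchaseOrder:
    """Toy order requiring a second, distinct approver, so a duped
    (or malicious) single user cannot push it through alone."""

    def __init__(self, creator, amount):
        self.creator = creator
        self.amount = amount
        self.approver = None

    def approve(self, approver):
        # Separation of duties: the creator may not self-approve.
        if approver == self.creator:
            raise PermissionError("creator cannot approve own order")
        self.approver = approver

    @property
    def authorized(self):
        return self.approver is not None

order = PurchaseOrder("alice", 50000)
try:
    order.approve("alice")       # a phished account tries to self-approve
except PermissionError:
    pass
print(order.authorized)          # False: still needs a second person
order.approve("bob")
print(order.authorized)          # True
```

The point isn't the code; it's that the check lives in the workflow, so one compromised credential does limited damage.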
Other major news lately smacks of these same challenges, since we're all behind the curve in digging down into what will really improve security, not just bandaging and working around things. Hannaford had malware on 300 (all?!) internal credit card-processing servers--I still maintain this stinks of an inside job--how the crap did that happen? An insider recently made fraudulent trades, earning quite a load of money, just because he had access and controls were lacking.
This is a shift from stopping technological threats with technological controls; malware stopped by AV, scan tools stopped by firewalls. This is bleeding into two far more difficult areas: business process and human mistake. It is easy for someone at Geek Squad to belt out AV, HIDS, NIDS, firewalls, spam gateways, and strong passwords as methods to add security. But I think we're at a point where we need to move beyond those levels and get into the real deep stuff, the things that make our brains hurt trying to think about (or organize meetings with the appropriate stakeholders!).
Change control, data access policies, audit, access restrictions, strong authentication, authorizations by committee rather than just the IT team... This is the real reason, in my mind, that so many people are clamoring about IT/security aligning with business: our next projects can only be done with the business cooperating.
Ever try change management in the silo of IT? Or auditing, or any of that stuff? And in the absence of those projects, ever try to guarantee security using only technical means that IT is the sole proprietor of? I strongly believe in technological controls and the remarkably high value they have, but I'm also highly sympathetic that those controls alone are not enough, rather just the starting baseline of a strong security foundation.
Then again, I could be barking up a deaf tree. Business is not economically willing to stop all cyber insecurity; otherwise sec geeks wouldn't be unanimous in our yearning for more staff, more budget, and more business cooperation. It is still far less economically challenging for a business to meet PCI and implement firewalls, HIDS, HIPS, spam filters, and other technological controls.
I could also be way off the green in a sand trap by focusing on the sensational, one-off media news reports mentioned above. Maybe those are unfortunate incidents that got trumpeted on front pages, but are not everyday or every-year happenings. If there's one thing the media will always have in abundance, it's stories about failure. That's life!
by michael 04.14.08 at 1:05 PM in /general
It has been almost 2 years since I changed my job situation up. I was hoping, 2 years ago, to get into a networking or security job when I took up my current role as a Network Analyst. Instead, I found myself back in the hole of Windows web administration and developer support, among many other things, some of which do include security. I've been slowly clawing my way out of that area, but now the more senior coworker who managed our company's web environment with me has resigned, leaving me as the sole expert in this area on our team. I've definitely had happier days, as I now try to catch up on what he managed on top of my own stuff. I was hoping I would get out of here before he did so I could avoid this! :)
So that means I'm even more stuck in web administration (and various other things) for at least another 6 months here. It really does start to cause one to question one's career direction or personal happiness, just a wee little bit.
On the bright side, I do have more things to look forward to here, such as a Foundstone vulnerability scanning box I have sitting in the corner and a web app firewall/load-balance solution on the way in the next few weeks. And I do have a project to upgrade our host-based firewall solution and assume full control over it. But oh how I wish I could leave the developer/web support behind!
I also received access to my Offensive Security coursework this weekend. The material includes a couple PDFs and a nearly 700MB rar of tutorial videos. I've yet to extract the movies, but I'm really excited they're just a download and I don't have to bother picking them from the server one by one. I also have my access to the virtual labs on their VPN. I'm anxious to start in on learning more about BackTrack 3!
by michael 04.14.08 at 1:58 PM in /terminal23
My mention yesterday of the Offensive Security movie pack didn't properly do it justice. I said there was a nearly 700 MB .rar file of movies. This unpacked to over 100 Shockwave/Flash movies for a total of 3.4 GB. There is also a 400+ page lab .pdf file to be used in conjunction with the movies and the VPN connection to the lab network. This could be a little more work/time than I intended! The PDF and movies also have watermarks quite prominently displayed stating my name, email, ID number, and address. That's a nice deterrent against distributing the materials, but I might look into stripping them out of the movie files just because they're a bit of a distraction. When focusing on the terminal windows in the movies, it just seems like poorer quality than it is because the watermarks kinda blur into the background, like a dirty lens or poor resolution. I don't want to give these out to anyone, just clean up the experience. I'll have to read the docs to see if even doing that is against any rules I've signed.
Update: I obviously can't read folder sizes properly. The movies are just over 700 MB, not 3.4 GB.
by michael 04.15.08 at 8:11 AM in /general
This looks like a fun little project that might run near $100 assuming one needs to get all the parts. The Predator from I-Hacked
essentially extends the range of an open wireless network, rebroadcasting it in a secure mode that you can hop onto. It does this with an external antenna and DD-WRT.
Does this have any uses? Well, I doubt anyone wants to cart this around on a trip, and it certainly looks suspicious in a parking lot. But it might make a decent addition to a wardriving car/truck/van setup. A few years ago this might have been a fun idea to get wireless access while around town, but these days cell phone-to-laptop Internet services and gear seem to be solving this problem. This could obviously be used to surreptitiously connect from a distance to closed wireless networks that you have cracked. Although it might be more useful to just plop the antenna on the laptop and crack/access that way as well.
by michael 04.15.08 at 4:09 PM in /general
A couple of interesting papers have been posted up on the SANS Reading Room:
First, "Malware Analysis: An Introduction." I don't particularly care so much for the introduction part, but I do like the walk-through later in the paper. I like to save paper, so I only printed out what I found interesting: pages 40-63. I should save even more paper and invest in a Kindle or e-book reader... One thing I noticed the author didn't use, but that I would recommend, is a snapshot tool run before and after execution of the malware to capture changes in processes, files, and registry entries (InCtrl5 is still a great choice). I know he watches Process Explorer and TCPView, but it can be difficult to read everything in real time if the malware does a lot. I was surprised there was no mention of Filemon or Regmon either.
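The before/after snapshot approach I'd recommend is easy to sketch. InCtrl5 covers files, registry, and INI settings; a bare-bones, file-system-only version (the helper names and walk logic are my own, not InCtrl5's) might look like:

```python
import os

def snapshot(root):
    """Record every file path under root with its size."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.path.getsize(path)
            except OSError:
                pass  # file vanished mid-walk; skip it
    return state

def diff(before, after):
    """Return files added, removed, and changed between snapshots."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    changed = sorted(p for p in set(before) & set(after)
                     if before[p] != after[p])
    return added, removed, changed

# Usage sketch: snapshot, detonate the sample in the sandbox, snapshot again.
# before = snapshot("C:\\")
# ... run the malware ...
# after = snapshot("C:\\")
# print(diff(before, after))
```

Unlike watching Process Explorer in real time, the diff sits still while you read it, no matter how busy the malware was.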
Second, "Espionage – Utilizing Web 2.0, SSH Tunneling and a Trusted Insider." I didn't think this would be something I'd print out and read, but in quickly scrolling through it, it seems to pack a lot of very technical stuff into a web-borne client-side exploit. I appreciate that! Later in the paper, Ahmed discusses the incident response actions of the victim.
I swear I picked these up from McGrew's
blog, but can't find them now. I could be wrong and got them elsewhere...
by michael 04.15.08 at 4:32 PM in /general
I was organizing some old files and came across one of my favorite 22c3 recordings. Tim Pritlove gave a "talk" called The Realtime Podcast
, and I'm amazed I never posted about it. Tim's talk was a realtime podcast on the topic of podcasting. If you can get your hands on the mp4 recording you'll much appreciate it over the low res, reduced audio of the linked Google Video version.
One thing I've noticed on most (all?) podcasts I've listened to is that they have no background music playing. I find it interesting and somewhat more "focusable" to have the background music that Tim uses. I'd be curious whether that would work well for any security podcasters, especially when the levels are controlled.
Tim pimps out DJ L'Embrouille [translated]
in his podcast, a DJ who freely releases
his electronic mixes. His sound ranges from ambient, minimal electronic to more house types of beats; basically stuff I totally dig. The mix it sounds like Tim is using is 2005 Week 38 (MPIIIRadiomix220905)
, although the levels are futzed a bit to reduce the heaviness and drop out much of the bass, I'm sure for podcasting purposes.
Drifting off on a tangent, many mixers put their little tags or snippets in the first few minutes of their mixes, and DJ L'Embrouille often does as well. He uses an almost whispered monologue. I have no idea if he came up with it, spoke it, or where it comes from, but it's an amazing little piece*:
tune in and drop out;
you can't say that;
what I am saying,
happens to be,
the oldest method,
of human wisdom;
find your own divinity,
from social and material struggle;
tune in and drop out
* In doing just a bit more research, I think this piece is a reference to, if not an audio sample of, Timothy Leary,
who coined the phrase "turn on, tune in, drop out."
by michael 04.15.08 at 11:27 PM in /general
Still haven't purchased some identity theft protection? Perhaps IdentityLoveSock
is for you!
by michael 04.16.08 at 9:46 AM in /general
The SANS Diary has posted a recovered tool that has been used to do mass defacements of websites. I'm sure this is being posted all over, so I won't wax on it too much. The tool uses a search engine to find potentially vulnerable sites, then mass-attempts SQL injection against them. It's a sweet, simple little tool, and I'm sure there are many, many others out in the wild just like it that simply haven't been recovered or distributed by the author.
Bojan closes the piece with the necessary suggestion for everyone: fix your shit. Run your own scans against your web apps because attackers are already doing it. Kinda reminds me of port scanning your firewall...attackers do it, so should you! You've already lost the battle if attackers have more information than you do, or find that open port (vulnerable input) before you do.
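If you do scan your own apps, the detection half of what such a tool does is trivial: throw a stray quote into a parameter and look for database error text in the response. A hedged sketch of just that detection logic (the signature list is my own partial selection of common error strings, and the function is my own, not the recovered tool's code):

```python
# Database error fragments that commonly leak into HTML when a
# parameter is injectable. Partial list, my own selection.
SQL_ERROR_SIGNATURES = [
    "You have an error in your SQL syntax",    # MySQL
    "Unclosed quotation mark",                 # MS SQL Server
    "supplied argument is not a valid MySQL",  # old PHP mysql_* errors
    "ORA-01756",                               # Oracle: improperly quoted string
]

def looks_injectable(response_body):
    """Flag a response that appears to expose a raw SQL error."""
    body = response_body.lower()
    return any(sig.lower() in body for sig in SQL_ERROR_SIGNATURES)

# In a real scan you'd request page?id=1' and inspect the returned body:
sample = "<html>Warning: Unclosed quotation mark after the character string</html>"
print(looks_injectable(sample))                   # True
print(looks_injectable("<html>All good</html>"))  # False
```

Attackers run exactly this loop against you at search-engine scale, which is the whole argument for running it against yourself first.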
by michael 04.16.08 at 12:57 PM in /general
I've previously mentioned
a web-based SSH tool as a way to access your SSH server through a web browser (and port 80/443).
Another such tool is up: consoleFISH. I tried it out really quick (I didn't complete the login process), and it seems to work nicely. Of course, when using such a tool, assume that everything you type is being read by the web server, including the password. Would I use this? Maybe in an emergency or when accessing an SSH server I care nothing about (someone else's!), but not likely for any of mine unless it was through my own web server. I may as well just port forward locally with PuTTY or use AjaxTerm
on my own server here...
Snagged this from a0002.
by michael 04.17.08 at 10:07 AM in /general
I just signed up for Twitter. I also embedded a tracker for just my posts over on my right menu bar, up near the top.
I've been online a relatively long time now, nearing 15 years, which has included a lot of social stuff (IM, IRC, forums...). Because of this, I'm not terribly quick to adopt various newfangled social networks. It's a lot of work to maintain a presence, and most of my old stuff still works just fine for me. But Twitter looks interesting and mildly useful: basically a web-based IM system when used with others, and a more streamlined, eye-blink, stream-of-consciousness blog/journal type of thing when used alone.
I don't really have ambitions for Twitter beyond logging my own goings-on that aren't quite blog-worthy, so feel free to invite/abuse/include me in whatever. Never know, I may instead decide half my posts to Twitter are useless to even me, and the rest I could just roll into blog posts... I certainly have that freedom since I have no ambitions with my blog itself (hence no ads or viewer tallies!).
by michael 04.17.08 at 1:44 PM in /general
Slashdot ran an email from a senior security engineer lamenting his company's ethics in security auditing. Dan Morrill posted about it, which was my first exposure to it. I posted a comment on his blog, and he sort of lightly guilted me into posting it on my own blog here. Honestly, I had some points in it that I kinda didn't want to just lose to the ether, and would rather save them here for myself.
So read Slashdot first, then Dan, then my post will make more sense. I will concede points that say audits really are a bit about negotiating your security value, but I think it needs to be documented. Risk A, mitigating factors B, accepting C...
I know it's a cop out, but I would look for work elsewhere. It's not only a cop out, but also a bit of a cynical approach. But once you drop down this road of fudging the risks/numbers, where do you stop? Where do you re-find that enthusiasm for an industry you're helping to game? What if your name gets attached to the next big incident? What if the exec that got you to bend throws your name out to others looking for the same leeway? Integrity is maybe our most important attribute in security.
I know strong-arming (or outright lying!) happens, it always happens. I think the only way this won't happen is to have a very mature, regulated industry much like insurance or the SEC/accounting/financial space.
Of course, this also means we need to remove or greatly reduce subjective measures and focus on objective ones. Those are the ones we hate: checkboxes and linear values. Those suck to figure out, especially when every single org's IT systems are different. I just don't think that will happen for decades, if that. Unlike the car industry or even the accounting disciplines, "IT" is just too big and broad and has too many solutions to control it.
This leads to one of my biggest fears with PCI. Eventually it will be something negotiated, and the ASVs will be the ones taking the gambles. Lowest price on a rubber-stamp PCI compliance. Roll the dice that while we roll in the money, our clients don't get pwned in the goat... good old economics and greed at work.
This also penalizes the many people who are honest, up front, and deal with the risk ratings in a positive fashion. Sure they may get bad scores, but that means there is room for measurable improvement. There are honest officers and people in this space. But there are also those who readily lie and deceive and roll the dice on security, and those are the ones who will drive deeper regulation and scrutiny.
I'm confused by the post itself. I'm not sure if his company is being strong-armed or if his company is doing the strong-arming.
If his company is being strong-armed, then any risk negotiation should be documented. "We rated item #45 as highly important. Client (name here) documented that other circumstances (listed here) mitigate this rating down to a Medium."
If his company is doing the strong-arming, you might want to just let the senior mgmt do their thing. Ideally, if shit hits the fan, it is the seniors that should be taking the accountability, not others, especially if they've been involved in the decision making processes.
With this line of thinking, there is another thing: the geek factor. As a geek, I tend to know about and inflate the value of very geeky issues. It is often up to senior mgmt or the business side to make decisions on the risks. Sometimes, the decision is made to accept the risk. This means possibly not fixing a hole because the cost is too great, even if there is a movie-plot potential for a big issue. It might be an approach to sit back, take some time and reflect on the big picture a little more. Are these strong-arm tactics covering up truly important things? Or are they simply offending our geek ethic?
One could also ask: what is the proper measure of security? It is always a scale between usability and security, and, in the words of the poster, there will always be some scale that involves accepting some risk in order to keep one's job. The alternative is to be so strict about security that you could only get away with it in a three-letter agency or a contractor thereof!
Ok, after all of that, if the guy wants to keep his job (or not I guess) but yet blow the whistle on such bad practices, I'll have to put on my less white hat and give some tips.
It sucks to do, but sometimes you do have to skip the chain of command and disclose information to someone up above the problem source. I'd only do this after carefully considering the situation and making sure I have an out. Even an anonymous report to a board of directors is better than silently drowning with the rest of the ship.
If there is a bug or vulnerability in an app or web app, get it reported through your reporting mechanisms internally, like a bug system or ticket system. Get it documented. The worst they can do is delete it, at which point you might want to weigh disclosing it publicly somehow... (of course, by that time, they'll likely know it was you no matter how anonymous you make it).
If the company is big enough and the issues simple enough, you might get away with publishing anonymously in something like 2600, The Consumerist, or a third-party blog. Sadly, when trying to get people to understand technical risk, it can be difficult to be precise, understandable, and concise. If the guy belongs to some industry organizations (Infragard, ISACA, etc.), perhaps leaning on some trusted (or NDA-backed) peers can be helpful.
by michael 04.18.08 at 7:43 PM in /general
An article on CNET about a LendingTree data leak
made me pause for a moment.
Several former employees of LendingTree are believed to have taken company passwords and given them to a handful of lenders who then accessed LendingTree customer data files, the company said.
LendingTree could also face lawsuits from its customers, as well as sanctions from the U.S. Federal Trade Commission, particularly given the potential for identity theft...
I hope that those employees were already "former" when these incidents occurred. That makes life a lot easier. But what if they were still valid employees who gave away their valid passwords to a presumably remotely accessible system (web portal, most likely)? That just sucks. We go from corporate negligence to malicious insider, and that's a world of difference.
This should bring up questions of how to make authentication non-transferable. Or about the need and scope of remote access. Or that we simply can't be perfect and sometimes, especially with malicious insiders, ultimately our only recourse is rigid auditing and alerting.
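That last point, auditing and alerting, can be made concrete. One simple heuristic for spotting shared credentials is to flag any account that logs in from several distinct source IPs within a short window. This is my own illustrative sketch, not anything LendingTree did; the event format, thresholds, and names are all invented.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_shared_accounts(events, window=timedelta(hours=1), max_ips=2):
    """Flag accounts whose logins come from more than max_ips distinct
    source IPs inside any sliding window.

    events: iterable of (username, source_ip, datetime) tuples,
    e.g. parsed from a web portal's access log.
    """
    by_user = defaultdict(list)
    for user, ip, ts in sorted(events, key=lambda e: e[2]):
        by_user[user].append((ip, ts))

    flagged = set()
    for user, logins in by_user.items():
        for i, (_, start) in enumerate(logins):
            # distinct IPs seen within `window` of this login
            ips = {ip for ip, ts in logins[i:] if ts - start <= window}
            if len(ips) > max_ips:
                flagged.add(user)
                break
    return flagged
```

It won't catch everything (an insider handing out a password used from one lender's IP looks like one busy user), which is why the alerting has to pair with periodic human review.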
by michael 04.23.08 at 9:56 AM in /general
Questions/tickets posted to me today remind me how much of a stressor it is to support developers. Typically speaking, developers have very few boundaries in which to solve their problems. Their lack of boundaries turns into my headache when they start finding creative (read: horrifying) solutions to problems. Kinda like kids who want to do something but can't, and then find some unexpected, completely terrible way to do it that leaves a hole in the wall.
And sometimes, it's not their solutions that suck, it's the bad initial requirements that suck and really aren't possible in a given architecture without a lot of unnecessary pain, cost, and compromise of security posture. And of course it's my team that gets to be the mean parent...
by michael 04.23.08 at 11:00 AM in /general
So I've spent several days on Twitter, alternating between ignoring it and being interested in the goings-on.
My impression of Twitter is: IRC+IM+Web.
It is like IRC in meeting new people and hearing new voices, and having your voice heard by others you normally don't interface with directly; like sitting in an IRC channel with 50 others, you can just pipe up with something and get involved. I could have used forums instead of IRC in this comparison, but forums are threaded and usually slower, while IRC feedback is far quicker and linear.
It is like IM in tracking the people you like to talk to, direct messages, and so on. Unlike IRC where people come and go as they wish (minus your friend lists), IM is far more dependent on you having added them as a friend and vice versa.
And combine that with web accessibility. Companies have long fought against the time-wasters of IM and even chat (ok, fine, IRC is largely blocked because of its prevalence in bot control mechanisms), but people still want IM and chat. Hence, they now use a port 80 web interface to do essentially the same thing. If that is blocked, there are numerous other portals, site plugins, and clients to use to get the access. We're destined to lose battles against cultural trends unless we're an organization that absolutely requires high security.
Also, Twitter is easy to use and enjoy. There aren't a ton of features, which I think is a key to anything "2.0" these days. I know, all sec pros should know how to use IRC and various chat clients (you're old/middle school, right?), but the reality is not everyone has fired up a non-web-based IRC client before. So, this makes the IRC-chat part of the equation much more accessible.
It is definitely not bad, and I do enjoy it, especially since I don't get to use IM or IRC at work. Now, I can only join one public group of people in my Twitter club, but I can register other names for my other circles of buddies if I had any. :) I could even have a work name and a group with just coworkers about what we're doing or where we are.
by michael 04.23.08 at 1:22 PM in /general
(IN)SECURE Magazine Issue 16
is now available. Reminded about this by the ServerGuys
by michael 04.23.08 at 2:13 PM in /general
Thomas continues some talk about the merits (or lack thereof) of "defense in depth" (DiD). He is not sold on DiD as a core principle for security design. Which I think is perfectly fine! Even though I believe in the value of DiD, it might not always apply in every situation.
Three things to start any DiD discussion:
1) Thomas quotes Eric about my first point: "But Eric also associates ‘depth’ with network security, not application security..."
I think Eric is somewhat correct. Any discussion of DiD should start by framing where the discussion applies: application, network, or something else.
2) I've mentioned before about security religions
. There is a group who accepts nothing but truly secure "stuff"; incremental or DiD principles need not apply. These are black-and-white people: either it is secure or it is not. There is no use arguing DiD with someone who fanatically believes in absolute security; to them, DiD is absolutely worthless.
3) How do you define DiD? I know of two different definitions. First, DiD refers to layers of defense overlapping to cover deficiencies in other layers; complementary DiD
. One blanket can cover half your car when it is raining, but a second, different blanket overlapping the first one can cover the rest of your car. Second, DiD refers to layers that sit like concentric rings. If you break through one, you still have to break through several more; additive DiD
. Without defining our view of DiD, none of our analogies will be appropriate to compare.
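To make the difference between the two definitions concrete, here is a toy probability model of my own (not from Thomas's post); all the numbers are invented. Additive DiD means the attacker must defeat every ring in sequence, so the breach probability is the product of the per-layer bypass probabilities. Complementary DiD means each layer covers a different slice of the attack surface, and an attack only succeeds where no blanket covers.

```python
from functools import reduce

def additive_breach_prob(bypass_probs):
    """Concentric rings: attacker must defeat every layer in turn.
    bypass_probs: chance of getting past each layer independently."""
    return reduce(lambda a, b: a * b, bypass_probs, 1.0)

def complementary_breach_prob(coverages):
    """Overlapping blankets: an attack succeeds only where no layer covers.
    coverages: fraction of the attack surface each layer covers,
    assuming the layers' gaps are independent."""
    uncovered = 1.0
    for c in coverages:
        uncovered *= (1.0 - c)
    return uncovered

# Three rings, each bypassed 30% of the time: 0.3^3 = 2.7% breach chance.
# Two blankets covering 60% and 50%: 0.4 * 0.5 = 20% still exposed.
```

The point of the toy model is just that the two definitions compose differently, so an analogy that fits one (walls behind walls) can be nonsense for the other (patches over different holes).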
I sympathize with the points raised about forcing an attacker to spend more time and effort to reach an asset (attrition) and about causing them to trip more alarms while trying to evade everything you've thrown in their way (delay). Notice these don't *stop* an attacker, but they give defenders a chance to react better or avoid a compromise. Does an ad hoc military base erect walls able to withstand missiles, tanks, and planes? No, it relies on detecting incoming threats and reacting to them. Kudos on the point about reaction, though, since many of these attacks are so quick to execute in the cyberverse. But in counter, I'd rather know after the fact than not at all.
Some comments paint what I think is a realistic vision of DiD.
One comment mentions that DiD is all about economics. This is increasingly being called risk management.
If you have layered defenses where an attacker uses his known parlor tricks to get through the outer crust but has to spend a lot of time and energy to get any farther, because he's not as knowledgeable about other techniques, the risk of him bullishly continuing may be small.
Another comment mentions DiD should not be "an alternative to rooting out and fixing vulnerabilities."
Very true, but again this comes down to economics. It also seems to be the driving point behind WAFs. Rather than fix the code (which can be costly), just throw up a WAF and not bother fixing something that can be bandaged.
Complexity vs Security vs Economics...
by michael 04.23.08 at 3:01 PM in /general
How do you know your laptop users aren't using their cell phone connection to access the Internet around your firewall while at work?
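One way to answer that question: have an endpoint agent on each laptop report the public IP address it actually egresses through, and compare that against your known corporate egress ranges; a tethered cell connection will show up as a carrier address instead. This is a hypothetical sketch of my own; the network ranges, hostnames, and report format are all invented for illustration.

```python
import ipaddress

# Example corporate NAT/egress range (RFC 5737 documentation space here;
# you would substitute your real public ranges).
CORPORATE_EGRESS = [ipaddress.ip_network("203.0.113.0/24")]

def off_network(reports):
    """Return hostnames whose observed public IP is outside corporate egress.

    reports: dict of hostname -> public IP string, as collected by a
    (hypothetical) agent that asks an external service "what IP do you
    see me as?" while the machine sits on the office LAN.
    """
    suspects = []
    for host, ip in reports.items():
        addr = ipaddress.ip_address(ip)
        if not any(addr in net for net in CORPORATE_EGRESS):
            suspects.append(host)
    return suspects
```

It's coarse (a laptop legitimately at home trips it too), so it works better as an input to investigation than as a blocking control, and network-side detection of unexpected tethering interfaces would complement it.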
by michael 04.25.08 at 2:40 PM in /general
Microsoft Windows and IIS have long been a whipping boy for security issues. If you hadn't noticed, they're back in the spotlight, only not quite as loudly because of the technical nature of recent issues. But this year is different. Instead of Microsoft standing alone, web developers
are strapped to the stocks as well.
Microsoft has a new security advisory up (April 23rd)
giving vague details of a vulnerability that matches details provided by Cesar Cerrudo at HITBSecConf2008
. It sounds like
this is less an issue with external hackers and more an issue of trusting your developers, the ones who provide code that could possibly exploit this issue. The workarounds, as currently posted, are a bit annoying. I think every Windows admin has experienced angst when changing the accounts that services or pools run under, and we all do so only if necessary (and cross our fingers that nothing breaks too badly). And disabling MSDTC (COM+) when the apps that run your business use COM+ is not an option. (Microsoft may as well tell us to turn off the web server and unplug the machine!) I think I would be more concerned if I were a larger hosting provider running on Windows...
The above issue does not affect Vista or Windows Server 2008, it appears.
This is paired up with a recent large-scale wave of SQL injection attacks. Microsoft (and many others) rightly point the blame at developers and coding practices
. The OS and even the coding environment can only go so far to protect against incompetent, ignorant, or rushed developers. The rest is up to the developers and those leading the developers.
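The fix for this class of SQL injection has been known for years: never concatenate user input into SQL; pass it as a bound parameter instead. A minimal sketch using Python's sqlite3 module for brevity; the same placeholder pattern applies in ADO.NET, JDBC, or any other parameterized API.

```python
import sqlite3

def find_user(conn, username):
    # The ? placeholder sends the value separately from the SQL text,
    # so the input is always treated as data, never as SQL syntax.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Small in-memory database to demonstrate against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
```

A classic injection payload like `' OR '1'='1` simply matches no username instead of rewriting the query, which is exactly the property the string-concatenation code being mass-exploited lacks.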
Attackers continue to move up the layers.
by michael 04.29.08 at 8:32 AM in /general