starting the offensive security coursework

My mention yesterday of the Offensive Security movie pack didn’t properly do it justice. I said there was a nearly 700 MB .rar file of movies. This unpacked to over 100 Shockwave/Flash movies for a total of 3.4 GB. There is also a 400+ page lab .pdf file to be used in conjunction with the movies and the VPN connection to the lab network. This could be a little more work/time than I intended! The .pdf and movies also have watermarks quite prominently displayed showing my name, email, ID number, and address. That’s a nice deterrent against distributing the materials, but I might look into stripping them out of the movie files just because they’re a bit of a distraction. When focusing on the terminal windows in the movies, the quality seems poorer than it is because the watermarks kinda blur into the background, like a dirty lens or poor resolution. I don’t want to give these out to anyone, just clean up the experience. I’ll have to read the docs to see if even doing that is against any rules I’ve signed.

Update: I obviously can’t read folder sizes properly. The movies are just over 700 MB, not 3.4 GB.

a personal divergence and offensive security materials

It has been almost 2 years since I changed my job situation up. I was hoping, 2 years ago, to get into a networking or security job when I took up my current role as a Network Analyst. Instead, I found myself back in the hole of Windows web administration and developer support, among many other things, some of which do include security. I’ve been slowly clawing my way out of that area, but now the more senior coworker who managed our company’s web environment with me has resigned, leaving me as the sole expert in this area on our team. I’ve definitely had happier days as I now try to catch up on what he managed while also handling my own work. I was hoping I would get out of here before he did so I could avoid this! 🙂

So that means I’m even more stuck in web administration (and various other things) for at least another 6 months here. It really does start to cause one to question one’s career direction or personal happiness just a wee little bit.

On the bright side, I do have more things to look forward to here, such as a Foundstone vulnerability scanning box I have sitting in the corner and a web app firewall/load-balancing solution on the way in the next few weeks. And I do have a project to upgrade our host-based firewall solution and assume full control over it. But oh how I wish I could leave the developer/web support behind!

I also received access to my Offensive Security coursework this weekend. The material includes a couple PDFs and a nearly 700 MB .rar of tutorial videos. I’ve yet to extract the movies, but I’m really excited they’re just a single download and I don’t have to bother picking them off the server one by one. I also have my access to the virtual labs on their VPN. I’m anxious to start in on learning more about BackTrack 3!

security 2.0 means technological controls are not enough

(Disclaimer: Take this post as a week-starting rant, and nothing more. Skip the stricken parts, read the first paragraph, then the bolded part and you’ll get the gist. I’m just a terrible editor and hate removing things I’ve written!)

I’m a bit late to the party, but I finally read a feature article over on BusinessWeek dealing with the Pentagon (and US gov’t in general), e-espionage, and email phishing. The attempt to inject fake emails into the lives of defense contractors and workers reminds me of Mitnick’s phone escapades with telecom companies: Sound like you belong there, speak the lingo, establish trust through deception.

This heralds a big change in cyber security at every level. It is no longer enough just to educate users about phishing. While that remains a good practice, it simply cannot guarantee a level of security. This is a fundamental change in how we do business and interact as humans.

The CISSP and many security fundamentals include the subjects of least privilege and separation of duties. It is important to realize that people will be duped. And if they get duped, what controls are in place to make sure they don’t do too much damage? If they authorize a fake order for military weapons, are there any checks or validations that can catch fraudulent activities that are within the bounds of that worker’s duties? Are they properly restricted in the access they have to various information? What change control is in place to prevent malicious (or accidental) activity? Will we even know an incident happened?

Other major news lately smacks of these same challenges, since we’re all behind the curve in really digging down into what will actually improve security, not just bandage and work around things. Hannaford had malware on 300 (all?!) internal credit card-processing servers–I still maintain this stinks of an inside job–how the crap did that happen? An insider recently made fraudulent trades, earning him quite a load of money, just because he had access and controls were lacking.

This is a shift from stopping technological threats with technological controls; malware stopped by AV, scan tools stopped by firewalls. This is bleeding into two far more difficult areas: business process and human mistake. It is easy for someone at Geek Squad to belt out AV, HIDS, NIDS, firewalls, spam gateways, and strong passwords as methods to add security. But I think we’re at a point where we need to move beyond those levels and get into the real deep stuff, the things that make our brains hurt trying to think about (or organize meetings with the appropriate stakeholders!).

Change control, data access policies, audit, access restrictions, strong authentication, authorizations by committee and not just the IT team… This is the real reason, in my mind, that so many people are clamoring about IT/security aligning with business: our next projects can only be done with the business cooperating. Ever try change management in the silo of IT? Or auditing, or any of that stuff? And in the absence of those projects, ever try to guarantee security using only technical means that IT is the sole proprietor of? I strongly believe in technological controls and the remarkably high value they have, but I’m also highly sympathetic to the view that those controls alone are not enough, rather just the starting baseline of a strong security foundation.

Then again, I could be barking up a deaf tree. Business is not economically willing to stop all cyber insecurity, otherwise sec geeks wouldn’t be unanimous in our yearning for more staff and more budget and more business cooperation. It is still not nearly as economically challenging to business to meet PCI, implement firewalls, HIDS, HIPS, spam filters, and other technological controls.

I could also be way off the green in a sand trap by focusing on the sensational, one-off media reports mentioned above. Maybe those are unfortunate incidents that got trumpeted on front pages, but are not everyday or every-year happenings. If there’s one thing the media will always have in abundance, it’s stories about failure. That’s life!

misleading article about letting users manage their own pc

I’ve finally actually read the article I previously mentioned, IT heresy revisited: Let users manage their own PCs. While I like the topic and it brings good discussion, the author goes off on too many bad points. In fact, I think the author needs to simply spend some time in an IT department (more than likely the author is a stay-at-home cyber journalist who is king of his 2-computer home network and all-in-one fax-printer…).

I want to start out with a disclaimer that I am sympathetic to both sides of this debate, both the side of centralized control (for operations and for security) and the side of user freedom. I can argue this either way all day or night.

The author repeatedly uses Google and BP as examples of this empowerment of users, but this is misleading.

Search giant Google practices what it calls “choice, not control,” a policy under which users select their own hardware and applications based on options presented via an internal Google tool. The U.K. oil giant BP is testing out a similar notion and giving users technology budgets with which they pick and buy their own PCs and handhelds.

This is a hell of a lot different than opening up employees to truly choosing their own hardware and software. This is still a list approved and likely supported by Google’s internal staff.

In this Web 2.0 self-service approach, IT knights employees with the responsibility for their own PC’s life cycle. That’s right: Workers select, configure, manage, and ultimately support their own systems, choosing the hardware and software they need to best perform their jobs.

Really, they support it? So when they mess it up, they have administrative rights to uninstall and reinstall? Do they have the ability to call the manufacturer and talk through a motherboard that is flaky and get a new one sent out? I’d have to call dubious on that. Sure, they can choose their software from a list of options, but that’s still not truly the freedom many workers are looking for in managing their own workstation. If they can’t put on Yahoo toolbar, Google toolbar, 3 different IM systems, and 4 screensavers of their choice (yes, people still do that!), then it’s not the freedom they’re often wanting. The author is misrepresenting this group, or poorly defining the group (more on that later!).

All too often, IT groups write and code policies that restrict users, largely based on a misbegotten belief that workers cannot be trusted to handle corporate data securely, said Richard Resnick, vice president of management reporting at a large, regional bank that he asked not be identified. “It simply doesn’t have to be this way,” Resnick said. “Corporations could save both time and money by making their [professional] employees responsible for end-user data processing devices.”

I can’t outright agree with these sentiments. There are plenty of instances where employees shouldn’t be trusted with such data. In my company, we have an email filter that looks for sensitive data such as SSN fields in an Excel spreadsheet being sent. It captures this and turns the email into an “encrypted” email by forcing the recipient to log into an account on our mail server and pick it up. Users don’t like this (duh, it’s a terrible solution) and we’ve had one user mask the SSN field just so she could email the document to a client. This user didn’t even have any admin rights on her system, but still had the ability to put data at risk to satisfy a task.
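For illustration only (I have no idea what rules our actual filter product uses internally), this kind of content filter usually boils down to a pattern match on outbound mail. A minimal grep sketch, where the filenames, messages, and quarantine wording are all my own invented stand-ins:

```shell
# Two hypothetical outbound messages, one containing an SSN-formatted field
printf 'Employee: Jane Doe, SSN: 123-45-6789\n' > outbound.txt
printf 'Quarterly totals attached, call re: invoice 4012\n' > clean.txt

# An SSN-shaped pattern: 3 digits, 2 digits, 4 digits, dash-separated
ssn_regex='[0-9]{3}-[0-9]{2}-[0-9]{4}'

# A match diverts the message into the "encrypted" pickup flow
grep -Eq "$ssn_regex" outbound.txt && echo "quarantine: force encrypted pickup"
grep -Eq "$ssn_regex" clean.txt   || echo "deliver normally"
```

Note how trivially the user in my story beat this class of check: mask or reformat the digits and the pattern simply never fires, which is exactly why I say the technical control alone guarantees nothing.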

People don’t think about data security, even if that is spelled out as their responsibility in a policy. Users care about getting their jobs done. While this isn’t universal and plenty do act responsibly, we are forced to react to those that don’t.

To IT, the glaringly obvious advantages of user-managed PCs are reduced support costs and far fewer pesky help desk calls.

I don’t buy this either. Users may have more questions since they all have their own setups and IT staff will need to know a wider array of those options. That or they will turn users away when confronted with unsupported software/hardware, causing frustration.

One thing IT needs to worry about is simply displacing the frustrations that users have. Such empowerment may move frustration from users not having enough freedom to users having so much freedom that IT can’t properly support them. Should users be frustrated with not being able to install their favorite software, or be frustrated when their PC runs dog slow with all the crap on it? Or will they be frustrated with the array of choices in software and hardware and just want a template for their job? I know many coworkers who would actually be unable to properly choose their own hardware and software to get their jobs done, and who feel far more comfortable having it prescribed to them. Sure, the freedom may be fun, but the grass on that side of the fence still tastes like grass after a few chomps.

Google CIO Douglas Merrill concurred. “Companies should allow workers to choose their own hardware,” Merrill said. “Choice-not-control makes employees feel they’re part of the solution, part of what needs to happen.”

Again, I disagree in part. For many workers their job duties do not include maintaining a proper PC system. They want and need IT to take care of that often frustrating piece of their day. We fight this every day in the security field with people claiming security isn’t their job. (And I’ll argue that they’re both right and wrong.) Besides, do you want your employee making sales calls all day, or spending half the day maintaining their system?

“Bottom line: The technology exists,” Resnick said, “[But] IT has no interest in it because their management approach is skewed heavily toward mitigation of perceived risks rather than toward helping their organizations move forward.”

I’ve disagreed a lot with this article, but I do realize the problem posed above. I don’t think these risks are necessarily perceived risks, but we do have to keep an open mind toward improving employee morale and productivity with computing. If we can peel back control without incurring excessive costs and risks, why not? Are we holding the company back, or are we encouraging innovation and creative solutions?

Sadly, the article continues to pound home that workers should be able to choose their own hardware and systems. This is a hell of a lot different than someone downloading and installing and managing their own software independent of IT entirely.

“I would expect most companies to implement basic security protocols for employee PCs, including virus scanning, spam filters, and phishing filters,” Maine’s Angell said. “They might provide software tools or simply implement a system check to make sure that such items are running whenever the employee’s laptop is connected to the company environment.”

Unfortunately, some host-specific security mechanisms become largely useless if users have administrative rights on their systems. IT cannot rely on the host-based firewall to limit access to network resources (users can just turn it off) or to stop the egress of malicious connections (users can just click allow). A piece of malware run by a user may disrupt such controls immediately. Basically speaking, IT can remotely monitor systems that users control, but can guarantee no level of security. IT no longer owns that piece of hardware; someone else does.

Finally! At the end of the article the author defines the audience he’s really been addressing this whole time: users who have some technical proficiency and a stake in remaining creative with their problem-solving on their PCs. The author should really have put this at the front of the article, but instead chose to hold it back until now. Basically stirring the pot with a sensational piece and then narrowing it down to something more reasonable at the end, much like trudging 3 blocks in the pouring rain only to arrive at your destination and realize you could have gone one extra block and taken a skywalk the whole way.

letting users manage their own workstations

I’d been slowly compiling a list of points on the topic of corporate users being allowed administrative rights on their systems. Not that I want users to have such power, but what if it’s not your choice? What if it costs more to piss off your users and stifle creativity than it does to exert draconian control over their systems? The sort of topic that goes into what to do in such an environment to tip the scales back in the IT/Sec team’s favor.

Seems a similar story has run on InfoWorld, been Slashdotted, and been mentioned elsewhere. Nice discussion! Hopefully soon I can tie up my own post, but, being a braindump sort of post, it seems never-ending!

a little bit of personal perspective

Sometimes you need a little perspective in the business world, mostly to remind yourself that everyone is still human, no matter what their station or salary in life. Even sec geek-related news can offer perspective (e-Discovery).

Seattle is in the midst of losing its NBA team, the Seattle Supersonics. The new owners bought the team in 2006 and have maintained that they are operating in good faith with the city of Seattle and are simply unable to come to a compromise. The owners want to move the team to, of all places, Oklahoma. Recently obtained emails paint a far different story.

Here is an exchange between Clay Bennett and Tom Ward. Clay Bennett is now a co-owner of the Supersonics, parks his arse as chairman in a couple of energy-related places, and was previously a co-owner of the San Antonio Spurs. Tom Ward appears to be a billionaire in something or other to do with energy and is also a co-owner of the Supersonics.

“Is there any way to move here [Oklahoma City] for next season or are we doomed to have another lame duck season in Seattle?” Ward wrote.

Bennett replied: “I am a man possessed! Will do everything we can. Thanks for hanging with me boys, the game is getting started!”

Aubrey McClendon, a minority owner of the Supersonics (and a CEO blah blah blah also involved with energy) sent this email to Bennett and Ward shortly after purchasing the team:

…McClendon celebrated the news with the subject line: “the OKLAHOMA CITY SONIC BOOM (or maybe SONIC BOOMERS!) baby!!!!!!!!!!”

Of course, if you’ve ever managed a mail server in any fashion you have certainly seen the lameness that passes through email exchanges. Hell, I’m sure my own missives include plenty of lowbrow sludge. But still, it is always refreshing to see such eloquence from important business people who have more money at their fingertips than I will ever have a chance to have, writing in a way that makes me want to crack open a Busch Light and watch South Park after class with my other hand in my sweatpants.

booting up backtrack 3 beta

In preparation for taking the OSCP training from Offensive Security, I have downloaded and begun to try out BackTrack 3 beta. Some initial thoughts:

  • Upon booting from the live cd, my system immediately hopped onto the nearest open wireless network. “Hello neighbor, I didn’t know you put this up recently! Thanks for welcoming me right in, don’t mind if I do rummage in your cupboards!” This is a deviation from the stealthy approach BT2 took. I hope BT3 will return to the stealthy approach when it moves from beta.
  • The permanent hard disk install is not yet automated, although there is a menu option for it. Hopefully this is working by the final release, since the steps needed are not many or varied at all. Choose a destination, copy files, fiddle with lilo, done!
  • Stupid me, I didn’t write down my settings from my local BT2 install before wiping it out and installing BT3, so now simple things like monitor mode and kismet don’t work. Annoying, but should be simple to fix.
  • Once BT3 was installed, I saw the remote-exploit.org forums have really fleshed out since I last browsed around, and there are a lot of video and text tutorials and people throwing out ideas and such. The wiki is also working out nicely.

As mentioned, I installed it onto the hard disk of a laptop; the same system that has run BT2 for quite some time. I don’t need a dual boot setup since I’m an actual geek and have spare systems so I don’t have to pretend I use Linux (BackTrack) while really booting into Windows 99% of the time! This wasn’t difficult, but it does take about an hour to complete.

After booting into the livecd, the first thing I did was run fdisk against /dev/hda to remove my existing partitions and then create new ones. (The device names can be found under System->Storage Devices in KDE.) I then followed some instructions posted on the forum. There is also a vid (a Camtasia capture in Shockwave) going through the same steps.
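In case I ever lose the forum link, the rough shape of the manual install went something like this. This is a from-memory sketch, not the exact commands: the device names, partition layout, and directory list are assumptions based on my own single-disk laptop, and these steps are destructive, so don’t run them verbatim:

```shell
# Wipe the old partition table; create a root partition (hda1) and swap (hda2)
fdisk /dev/hda

# Format and mount the new root
mkfs.ext3 /dev/hda1
mkswap /dev/hda2 && swapon /dev/hda2
mkdir -p /mnt/backtrack
mount /dev/hda1 /mnt/backtrack

# Copy the live system over, then recreate the empty runtime directories
cp --preserve -R /{bin,boot,etc,home,lib,opt,pentest,root,sbin,usr,var} /mnt/backtrack/
mkdir /mnt/backtrack/{dev,mnt,proc,sys,tmp}

# Fiddle with lilo: point /etc/lilo.conf at /dev/hda1 as root, then install it
lilo -v
```

The KDE Storage Devices view mentioned above is just a convenient way to confirm which /dev node is actually the target disk before running any of this.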

Maybe when BT3 goes out of beta I’ll post, for my own future benefit, the actual keystrokes and steps to do an HD install and some initial configurations to get kismet and injection working, but for now the above links should suffice for my needs.