ode to the ciso

Cutaway posits, “Why is it that we have not seen college, high school, or any other school close their doors because of security breaches or just plain being totally owned?”

I’m not going to answer that, but I will say that this is my new ode to ousted CISO/CIOs who lose their positions due to a stupid security breach:

laugh and the world laughs with you,
weep and you weep alone,
for the sad old company must keep hold of its money
but still has security troubles of its own

This is adapted from a wonderful poem by Ella Wheeler Wilcox called Solitude. (If you like that poem, I highly suggest browsing her other works…)

patch tuesday information sources

For some time now, the ISC has been my first check for information on Microsoft patches from Patch Tuesday. I then follow links to the disclosures on Microsoft’s site and the CVEs for more details.

I see BreakingPoint has gone further and released a slew of in-depth looks at the patches and the vulnerabilities those patches, err, patch. I think this is awesome, and it fills in what I consider the last piece of the Patch Tuesday picture: overview, official statements, technical analysis. I hope they do this every month.

randomness: passwords, ids, salespeople, defaults, layers

I think every time I call one of my credit card customer service centers, I have the same befuddled response, probably because I only call once every 6 months, if that. "Can I have your password for this account?" Me: "…huh, what? I didn't know there was a password…" Rep: "It is probably your mother's maiden name." Me: "…oh… ok, well, let's try this." And of course it works… it's just so odd being asked for a password over the phone…

I really don't like having a gap between my use of an IDS/IPS and my knowledge of its signatures. Today a new alert came across proclaiming "NETBIOS-SS: Bugbear Virus Worm." I'm not sure what a "virus worm" is, but it certainly sounds like something to look at right away. It turned out to be a false positive, but I really wish I could see what my vendor's signatures actually are, rather than the interpretations of them in the management console (which are almost always inconclusive and vague). Oh, and since I'm complaining about the IDS/IPS, I'll echo my old complaint that I really dislike capturing only one packet per alert, even though I have it set to log the stream… one packet certainly gives me a lot of context!

Annoying vendor salespeople, tip #84: insist on digital communication via email only, and actively reject any attempts at face-to-face or voice-to-voice communication. I think salespeople have a handbook that says sales are guaranteed with face-to-face meetings and 80% guaranteed with voice-to-voice meetings. It's almost like watching a squirrel stuck inside a gallon milk jug.

What if we start convincing companies to roll out "secure by default" devices and software? Will we dumb down our workforce too much, with people who know how to roll something out but don't know how to manage anything? IIS is easy to build now, but it takes work to really understand it. Apache still scares IIS users because you need to make config changes early on… Just a thought, although I do believe "secure by default" should be the goal.

I was adjusting a script of mine the other day to handle a configuration error in some file replication apps we run. The config error broke the script's execution, so I coded around it (something like the sketch below) before I found the actual config problem. This is effectively a little bit of "defense in depth," although it has nothing to do with security. But what if a config error occurs again? Because I've layered my script over the config, it might mask the problem with the config. Can defense in depth mask holes in the various layers because testing isn't done on each piece? Possibly…
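To make that concrete, here is a minimal sketch of the tradeoff (the file name, setting, and default below are entirely made up): tolerate a bad config value, but warn loudly so the extra layer doesn't silently hide the underlying config problem.

# hypothetical sketch: read a replication target from a config file, fall back to a
# default when it is missing or malformed, but warn so the config error isn't masked
$configFile = "C:\scripts\replication.config"
$target = $null
if (Test-Path $configFile) {
    $line = Get-Content $configFile | Where-Object { $_ -match '^target=' } | Select-Object -First 1
    if ($line) { $target = ($line -split '=', 2)[1].Trim() }
}
if (-not $target) {
    Write-Warning "replication target missing or malformed in $configFile; using the default"
    $target = "\\backupserver\replica"    # made-up default
}
"Replicating to $target"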

spi dynamics web app hacking workshop

This morning I attended a workshop hosted by Michael Sutton of SPI Dynamics. Michael is the Security Evangelist for SPI Dynamics (kind of a mix between a trainer and a sales engineer, I think… does that not sound like a cushy role?), and he talked about hacking web applications. I just need to mention that the blogs and labs on the SPI Dynamics site are both nice resources. The talk had about 35-40 people in attendance: roughly 1/3 QA, 1/3 developers, and 1/3 security people, with a couple of managers and a couple of us sysadmins mixed in.

Michael opened up by talking about why web application security is important now, and then delved into describing and demoing 4 different attacks against web apps: XSS, SQL injection, CSRF, and Ajax attacks. While this isn't new to me personally, I don't think I've seen live demos of these attacks before, so that was a step up for me (come on, we don't get this kind of thing in Iowa every month!). He talked about reflected and persistent XSS issues, with a demonstration of persistent XSS, then covered both verbose and blind SQL injection. After a break we saw CSRF and Ajax demonstrations.

I do want to mention the tools used or mentioned. Oh wait, gosh, almost everything is done using just a browser. Of course, this means almost anyone can start picking this up and learning how to find these holes (increased risk!). Michael did mention Absinthe as a blind SQL injection automator, Live HTTP Headers (Firefox addon), FireBug (Firefox addon), and SPI Proxy (part of the commercial product WebInspect). The latter was used to intercept and change browser-server requests in Ajax pages. Very cool!

He then closed out with brief looks at the SPI commercial tools WebInspect and DevInspect, which both look nice for dev and security teams looking to automate and standardize their testing. My only brief nitpick on the presentation was the use of AJAX as an acronym in the slides, but he did mention that it is no longer really intended as an acronym and has come to simply describe new web behaviors. Kudos for hating on "Web 2.0" as a term, since I hate it as well.

Nitpick aside, the workshop was well done, a decent way to spend a morning away from work, and full of good information. I'd recommend it for anyone who isn't already a web application security guru who knows those attacks and tools inside and out. And no, it had no marketing spiel or slant to it.

lessons from a cyberdefense competition red team part 3

This is a 3-part account of my experience on the red team for the ISU CyberDefense Competition. Part 1, Part 2, Part 3

This section is just to document some of my thoughts on organizing a red team. Overall, I don't know if there are wrong ways to organize a team, but here are some ideas and thoughts.

1. Do a brief round of introductions, specialties, and backgrounds; newbies are welcome to say they're newbies. This gets everyone's name out there, breaks the ice for the shy ones, and helps everyone know whom to ask for specific expertise. It also lets everyone know who is in charge, i.e. whom to ask for direction or information, such as where to set up and how to connect. That person will need to repeat much of this for any latecomers.

2. Assign people tasks, rather than targets. App specialists tend to skip obvious network holes and can get distracted by app holes across various teams. It is best to keep people doing what they'd rather be doing, and it gives all teams a more equalized enemy. Newbies can get pretty good at scanning as they go, but a newbie assigned to a single team may give that team far fewer successful attacks with which to evaluate their defenses.

3. Make root the goal. Sure, DoS, service interruptions from a Nessus scan, and web defacements are fun, but really make root and total ownage the end goal. Create persistent backdoors and get inside. Even a team that thinks it was up most of the event may have been completely owned and leaking valuable information to outsiders.

4. I would consider DoS a valid attack in a competition where uptime is a scoring criterion, but only insofar as configuration errors make the DoS attacks possible. In other words, it should be preventable from a practical standpoint. Nonetheless, DoS shouldn't be used constantly; use it only to illustrate the vulnerability and drive home the point with some downtime and points loss. After the point is made, ease up and let the teams and attackers get more out of the experience. (Imagine your team is being DoSed and you don't really know how to fix it… and it lasts the whole competition… that sucks pretty hard when all you're missing is maybe the one config change that would fix it.)

5. Don’t overlook the obvious deficiencies. They may not lead to root, but noting things like a lack of SSL on logins or an MS Exchange server hanging out in the winds of the public net can be important notes to make when evaluating team performances. They’d be dings on professional evaluations, so may as well ding them here as well.

lessons from a cyberdefense competition red team part 2

This is a 3-part account of my experience on the red team for the ISU CyberDefense Competition. Part 1, Part 2, Part 3

Observations on cyber security
– Web application attacks and web site defacements are fun and tend to gather attention, from attackers and onlookers alike. Sadly, such attacks first of all don't bring services down, and so don't penalize teams all that much. Secondly, they result in little gain for the red team: a short-lived and very public victory that yields no root access, no further gains, and most likely closed vulnerabilities.

– Network and OS attacks have gone out of style, even against systems that have their balls left hanging out into the public network space. Several ripe systems survived a long time while people tinkered with web app vulnerabilities. Beyond nmap scans, I saw very little network twiddling or system/service attacks. This may mean that over the next few years, organizational network security and hardening will atrophy as everyone focuses on the web and clients.

– The perimeter is still a battleground, wherever it may be. One team was rumored to be running ipv6 on their inside network, which would have made things interesting for the Red Team had we ever gotten inside that network. Unfortunately, most efforts were spent pounding and poking the external surfaces and not cruising the internal networks. Note that while a web server is part of the perimeter, it has several exposed layers: application, server app, OS. And someone wants to say defense in depth is dead?

– A majority of the time spent by red team members was exploiting holes in the web apps. There were several openings to upload files and execute php code on the web servers. Unfortunately, once these attacks became apparent, such holes were closed and access became very limited. Some teams opted to break upload capabilities, others removed such sections from their site, and others took the more correct route of disallowing specific php execution or overwrites. Some teams left their /etc/passwd file exposed to such attacks, but all cases had shadowed passwords. More defense in depth…

– Doing the fundamentals greatly increases survivability on the Internet. Put up a firewall and make sure your perimeter doesn't leak information or have excessive holes. Keep patches up to date on systems and services. Change default passwords and accounts. And after those are done, then pay some special attention to the users such services run under, the web applications, and the web servers. I can see why network attacks are being eclipsed by web-based attacks: securing web apps requires a huge skill set and a lot of experience and knowledge. Do you know how to write interactive php apps running on Apache securely enough that they hold up to attacks? That's a job in itself, and for more than just one person. Once the fundamentals are taken care of, it really takes a special attacker to make headway into your network. Do they have 0day exploits for your patched services, or the ability to create them? It really does take special skills to do that. I think only at the DefCon CTF competition might there be such expertise in an organized competition. If you get the fundamentals down and your web apps and servers are solid, start changing service banners to really raise the bar for attackers. The fundamentals are money in the pockets of your company.

(A side note: not every IT person knows the fundamentals, even today. A lot of these students went into this competition missing the fundamentals, and many more will leave the program without a firm grasp of them. LEARN THE FUNDAMENTALS!)

– For the love of God, make an attacker’s life difficult. Make the firewall not respond to closed ports. Make my nmap scan take a long time and throw everything back at me as filtered. Make me earn my scans. Make sure egress on the firewall is strictly configured so even if I do get something planted, it might not be able to call back (and will be exposed in the logs).

– Read the fucking logs. Wow. Once a server is up and running and a service is being used, tail all the logs. The logs will reveal who the attackers are and what they are probing. Especially those web app logs (see the sketch after this list)! Typically, once a vulnerability is found, one red team member exclaims his find and the other 14 members of the team clamor over to see it themselves. Suddenly a ton of hits on a seemingly normal part of the site may be cause for alarm. And after an incident does occur, those logs are your survival. You can put a defaced site back up, but it will just get owned again. Check the logs and close the holes, either by changing the code, removing the offending pages, or adjusting server protections to disallow certain things.

– Similar to logs, it can be amazingly insightful to have an outside packet logger or IDS hanging off the external interface of the perimeter. Even a DoS against the firewall can be detected, diagnosed, and fixed with such data. Without it, a few teams were left wondering why their service seemed down: they looked at the service and the box it was running on rather than the firewall on the network.
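To make the log-reading point a little more concrete, here is a rough, Windows-flavored sketch (the log path and the patterns are placeholders) of tailing a web log and flagging requests that smell like probing:

# rough sketch: watch a web log as it grows and warn on requests that look like probing
# (the log path is a placeholder; tune the patterns to your own apps)
$log = "C:\inetpub\logs\ex071027.log"
Get-Content $log -Wait | Where-Object {
    $_ -match "\.\./" -or $_ -match "etc/passwd" -or $_ -match "(?i)union\s+select" -or $_ -match "<script"
} | ForEach-Object { Write-Warning $_ }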

Coolest attack
The most interesting attack I saw was actually a DoS attack. The attacker called it a reverse LaBrea attack, although it could also be called tarpitting a server or a FIN starvation attack. In it, the attacker opens hundreds of valid TCP connections and then begins the teardown sequence by sending an ACK/FIN packet. The server responds with an ACK and its own ACK/FIN packet, and then waits for the final ACK from the attacker. The attacker, however, never sends that ACK back, which keeps the server's connection open. Nicely enough, the server itself is not very busy, since it is only waiting for hundreds of open connections to end. This can be mitigated in numerous ways, such as detecting the DoS and adjusting the firewall to block after a threshold of connections has been reached, aborting out of TCP teardowns, or lowering the wait time. An iptables firewall is susceptible to this unless steps are taken to correct the behavior.
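On the receiving end, one rough way to spot this (a sketch of my own, not what the red team actually did) is to count connections stuck waiting on that final ACK; on a Windows box they pile up in the LAST_ACK state:

# sketch: count half-closed TCP connections per remote host; a big pile from one
# address is a decent hint that you are being tarpitted
netstat -an | Select-String 'LAST_ACK' | ForEach-Object {
    ($_.Line.Trim() -split '\s+')[2] -replace ':\d+$', ''   # foreign address, port stripped
} | Group-Object | Sort-Object Count -Descending | Select-Object Count, Name -First 10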

Some false lessons
While lots of lessons and insight can be learned, there are some red herrings and false lessons that hopefully no one takes away.

– Applications are not secure just because they are current. Almost every team was running fairly new versions of their applications and systems, from new phpBB and SquirrelMail installs to new Ubuntu boxes. While they might seem impregnable today, will they be impregnable in a year at a company with poor patch management? Nope. Those little applications like a wiki or phpBB can quickly draw cold sweat on the necks of IT admins…

– While the red team had a lot of talent in one room, I still wouldn’t consider any of them black hat hackers by any means. Most were still students, and only a handful were security professionals. The skillsets only go so far, but the real world can throw any level of skills at you.

– Attacks can still come from within. Pretty much out of scope was social engineering on any scale that would allow unsupervised physical access to systems. Also, attacks on clients, such as emailed trojans, were not really possible; there just weren't many systems on the inside to dupe for a foothold. This was more like attacking a standalone DMZ. But don't forget every system and user is a target.

– DoS attacks are still debilitating, and not only can they end your night quickly in a competition, they can close a business just as quickly in the real world. And for as debilitating as a DoS attack is, it is typically the least planned-for and, in theory, one of the harder attacks to thwart. (Ok, DDoS is harder, just to pre-empt commenters saying plain DoS is easy to handle!)

Personal lessons
I'm out of practice. A competition is not the place to fire up Metasploit 3 for the first time (although thankfully I have used Metasploit 2 in the past). Likewise, know the general tools for the basic stuff, and practice them: DNS zone transfers, nmap scanning, OS/service fingerprinting (both from a scanner and from just using the services, like Apache running on Ubuntu). I'm rusty on almost everything, so practicing is definitely in order. It's just one of those things I don't get to do on a daily basis on the job (or even weekly or monthly!). Know BackTrack tools inside and out. Be familiar with wireless attacks both as an associated client (airpwn, hamster/ferret, rogue AP, MITM) and as an outside attacker (DoS, WEP cracking, IV generation). Knowing these things well up front goes a long way toward being an efficient attacker. Just like the defenders, attackers need to know the fundamentals and practice them regularly. That means less time spent relearning tools or settings, and more time being surgical and creative. It can also mean less jubilation at low-level triumphs, and more thinking about how to leverage those quiet little wins into the most gain over the long run.

lessons from a cyberdefense competition red team part 1

This is a 3-part account of my experience on the red team for the ISU CyberDefense Competition. Part 1, Part 2, Part 3

This weekend Iowa State University held its annual CyberDefense Competition in Ames, Iowa. The event is hosted by students and faculty from the Information Assurance Student Group and the Electrical and Computer Engineering department. In the event, teams of students attempt to deploy and manage various services representative of normal business applications. During the 20 hours the event covers, the teams are scored on their service uptimes as tracked by network monitoring (Nagios) and other neutral teams acting as normal users of the services. In addition, much like the real world, there is another team of students, faculty, and area professionals acting as attackers, intent on owning and bringing down those offered services. The services the teams were required to offer were web services (with pre-packaged web content), mail (smtp and imap), a telnet shell, ftp, wireless access for normal users, and dns to get it all working.

The teams are made up of regular students, I believe mostly fulfilling requirements for a couple of classes. There were 15 teams, ranging from what looked like 4 members up to maybe a dozen. The students had time in advance to plan and implement their services. To illustrate the range of aptitude, at the start of the event only about half the teams had their services up and running. Even through the course of the night, some teams never got every service going, while more advanced teams were running ipcop, pf, or iptables firewalls and hosting services on Linux, with mail anywhere from MS Exchange down to SquirrelMail or IIS mail. This illustrates the widely varied skill levels across the teams. Some teams had everything choked down behind a firewall, while others had disparate boxes sitting on public IPs, and still others were struggling with DNS configs.

The “world” for this event is a bit interesting in itself. The teams were allowed to use publicly routable IP addresses because the event was hosted in the Iowa State ISEAGE system. ISEAGE (Internet-Scale Event and Attack Generation Environment) is a mostly closed network that simulates the Internet on a small scale to model attacks and other research activities.

At the end of the event, teams were scored and a winner announced, but more important are the lessons learned from the event itself: how difficult or easy it is to put up and manage services for a business, attend to the needed systems, and react to security events; what worked and what didn't. Hopefully everyone went away feeling at least a bit more enlightened about the world of professional IT, no matter what their final performance in the event itself.

On a more personal note, I certainly wish I had had more interest in the field of networking and security when I was in college, and I also wish we had had these kinds of groups. I graduated in 2001 with a degree in MIS from ISU, but I never had any security courses (and almost no security emphasis in programming or other classes), and my only networking exposure came in my last semester. I graduated having never really installed or upgraded an operating system, nor knowing much of anything about normal business services and technology. I'm amazed where I've come since then, and I'm amazed that college studies are starting to catch up to the real world of IT and move away from the academic "let's just teach everyone C and theory" practices. A competition like this, where students install and work on these services, is downright invaluable, even to those who didn't successfully get services running. IT is so much about doing things, not about sitting in a classroom listening to a lecture about the theories.

license to be a digital nuisance

Michael has Friday off. Michael has the day off because he will be attending Iowa State U's 3rd (?) annual CyberDefense Competition as a member of the Red Team. Michael would link to the site, but even his Google-fu is not yielding up a currently active site. This CyberDefense Competition hosts teams of college students trying to run servers and systems in a hostile environment for over 24 hours. Michael anticipates having a fun time and likely learning far more than he is able to give.

reboot a system in powershell

For a script I have that maintains our systems and installs new versions of our web code, I have the occasional need to reboot a server after that install. Like most things in programming, there are several ways to script a reboot.

This first example (my preferred method) reboots a remote system.

$objServerOS = gwmi win32_operatingsystem -computer servername    # WMI OS object on the remote box
$objServerOS.reboot()                                             # ask that OS to reboot

Leave off the “-computer servername” to reboot the local system.

This second method is similar. The following lines will reboot the local system, force the reboot of the local system, or reboot a remote system.

(gwmi win32_operatingsystem).Win32Shutdown(2)                        # reboot the local system

(gwmi win32_operatingsystem).Win32Shutdown(6)                        # forced reboot of the local system

(gwmi win32_operatingsystem -ComputerName Server).Win32Shutdown(6)   # forced reboot of a remote system

Here is a list of the codes that can be used.

0 -Log Off
4 -Forced Log Off
1 -Shutdown
5 -Forced Shutdown
2 -Reboot
6 -Forced Reboot
8 -Power Off
12 -Forced Power Off

Pipe $objServerOS or (gwmi win32_operatingsystem) to Get-Member to see more goodies.

There is a good chance the above commands will error out when trying to do a reboot, complaining about privileges not held, even if you're running as admin. Add a line to enable privileges between getting the WMI object and calling the reboot, and it will process just fine.

$objServer = gwmi win32_operatingsystem
$objServer.psbase.Scope.Options.EnablePrivileges = $true    # request the shutdown privilege on the WMI connection
$objServer.reboot()
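Putting those pieces together, this is what I would use for a forced reboot of a remote box with the privilege enabled (servername is a placeholder):

$objRemote = gwmi win32_operatingsystem -ComputerName servername
$objRemote.psbase.Scope.Options.EnablePrivileges = $true    # request the shutdown privilege
$objRemote.Win32Shutdown(6)    # 6 = forced reboot, per the table above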

richard clarke’s five steps to save the internet

Richard Clarke recently spoke at a conference and listed five steps to save the Internet. Here is a brief on the five steps:

1. National biometric ID
2. More government oversight of the Internet
3. Nonpartisan government oversight to protect privacy
4. Secure software standards
5. A closed Internet for critical services like the power grid

1- I don't like the idea of a national ID or using biometrics, but I do know that social security numbers are antiquated and broken. They're just not working anymore in our ultra-efficient information age. I agree change needs to happen; I just don't know what solution I would like. Something similar to what the cyberpunk visionaries have written about for decades is most likely inevitable. An inevitable evil. I've long felt that a major hurdle for the Internet is identity: trusting it and verifying it. And no, I don't think OpenID is the obvious solution.

2- I don’t like this either, and hopefully it won’t happen; but I am surprised ISPs and the Net have held out this long and this well. Hopefully it stays that way.

3- Maybe I’m old-fashioned already, but isn’t privacy oversight covered by the judicial branch?

4- It is obvious that we need better standards. Is the government the proper standards-bearer? I doubt it, and I definitely wouldn't hang my hat on this getting done well enough to make an ultimate difference. It will help, though, as part of a blended improvement to cyber security and software security.

5- Hrm, again, I might be old-fashioned, but I call this either a private network or a network with strong perimeters and controls. I think Clarke is looking for attention and media drama by calling it a closed Internet, but I don't think that's what he really means. Why do you need a closed Internet, and how is that different from a private WAN? Open access to the web and other services? You mean like the walled garden from AOL? I'll dismiss this point because it is just bait and hype, nothing more.

defcon 15 video on the dirty secrets of the security industry

Finally getting around to watching DefCon videos, and I started out with Bruce Potter's Dirty Secrets of the Security Industry presentation. I've seen recordings of Bruce Potter talks from ShmooCon before, and I've enjoyed his presence. Definitely a cool guy with a lot of passion for the industry, and I think he's open to creating discussion, even if he knows he's wrong and is just trying to get everyone to think. I can't help but admire that! Here are some notes, followed by my reactions. I definitely recommend watching this talk. Everything in blockquotes is paraphrased or quoted from the slides and presentation.

Bruce opened by talking about some foundational concepts and history of security. He made a point to show that security is still growing and making more and more money. He then went into his dirty little secrets.

Secret #1 – Defense in Depth is Dead – The problem is in the code. We’ve always had bad code. Fix the code. Firewalls don’t help things that have to be inherently open, like port 25 to the Internet for the mail server. Spending way too much money and time with defense in depth! Need type safety (programming), secure coding taught in schools, and trusted computing. We need better software controls on our systems, not better firewalls.

I’m hearing a lot more about this lately, about how we need inherently secure systems and devices and protocols. 🙂 All his points are good, and I really don’t oppose outright a viewpoint like this. We need better training for software developers and we really do spend a shit-ton of money on more and more defenses that are band-aids to deeper problems.

However, I don’t think defense in depth is dead. I think he has great points, but I’d throw a shmoo ball at him for the sensational title of the secret. 🙂 We’re humans, and humans are producing code. It just takes one incident (which he says in a later slide) and defenses can break. That’s the point of defense in depth. Not necessarily about band-aiding insecure code, but rather ensuring that 1) we account for mistakes and unknown holes, and 2) we make sure attackers have to really try, or collude, or take a lot of time. If I can solve issue GER, and that’s your only defense, I win. If I have to solve issue GER plus LIG, I’m stuck…or I have to find help or spend more time breaking in.

This defense in depth approach only makes it *look like* we’re just band-aiding insecure code, which we kind of are, but that’s just an ancillary issue. To put it better: it’s an arguable position. (Marcin, if you’re reading this, yes, I use these $10 words all the time!)

Secret #2 – We are over a decade away from professionalizing the workforce – Much of our job is learned through self-education, not professional education centered around security. How do we codify and instruct the next generation? Security is everyone's problem…because no one really knows how to properly do it. We can't train all our professionals, so how do we expect to train all our users? Users need tools that they can't screw up, tools that don't require education to be used securely. We are years and years away from making this better.

A-fucking-men. First of all, he’s got a point about not being professional yet. I went to school and got an MIS degree, which is, in effect, Comp Sci Lite. Did I get any information about security? Not a bit. Hell, I was barely prepared for a real technical job…I was more prepared to be a clueless analyst than technical. Bruce is absolutely fucking right that we’re almost all completely self-taught, either on the job or on our own. That’s not a professional workforce or industry. Not yet anyway.

I love his mention that security is not everyone’s problem. I love his mention that users don’t need training, they need tools they can’t fuck up. Absolutely! Likewise, if we pour a bunch of money into training, and an idiot or new user shows up and makes a mistake, all of that is wasted. We need the technological controls, and we need the secure systems, and we need the simplicity more than we need high-end training such that security can be everybody’s problem. That’s not to say I’m all about de-perimeterization!

But that gets back to defense in depth. Users will make mistakes, which is also what defense in depth helps to mitigate. Yes, I think the industry has gone overboard and yes, we spend way too much money on many levels of defense, and we need to start spending that money smarter, on better defenses, and more secure foundations. More on that coming up…

Secret #3 – Many of the security product vendors are about to be at odds with the rest of IT – The security industry has sold a lot of defense in depth; a lot of money that isn’t going to securing the foundation. Bruce uses Microsoft as a case study: Microsoft tries to make a more secure foundation, but then the vendors start complaining, and Microsoft has to bend and allow unsigned driver interaction.

Excellent points. In fact, this is an issue in more than just security. Lots of money is being spent on software and systems and security, and we're starting to question, "Why?" "Why did I spend XXX on 3 years of software assurance for MS SQL Server when no new product came out from 2000 until 2005?" I have used the example of Microsoft trying to secure its own product in the past, because it dramatically illustrates how our landscape has changed, and how the maturation of the security industry has created agendas to protect. I've been saying that Microsoft can't just up and create a secure OS anymore. The vendors won't let them. They'll have to do it slowly, like boiling a lobster.

Defense in depth may not be dead, but he has a point that we really are spending too much on it.

Secret #4 – Full Disclosure is Dead – There is too much money to be made in selling bugs; even companies are paying for vulnerabilities. We want to make live systems more resilient to attack, but this market for vulns means those companies are (potentially) profiting at the expense of the end user.

Again, very true, and that last sentence I don’t think I had realized outright before this talk. I still believe in full and/or responsible disclosure, but at least now I have some logic behind the bad taste those “pay for my vuln” scams leave in my mouth.

Bruce quickly closed out by re-emphasizing some of his suggestions:

Recognize that the landscape has changed. Push vendors to make products that actually create a secure foundation, not just more layers. We need to create a more formal body of knowledge for info security, and hold each other accountable.

This is an excellent talk, and I really love what he brings to the table. He wants to stir things up a bit, open discussions, and maybe even be wrong. But that's the sort of openness we need to keep striving for. He briefly mentioned being open and sharing information rather than bottling it up to sell under non-disclosure; not standing politely in line, but keeping the energy we know we have when it comes to toeing the line. I can only imagine how a group conversation at a bar about this stuff could last all night!

in linkedin

Oh noes, I’m in LinkedIn. Those of you who have bugged me…ok, ok, I’m digging it. I like that only people in my “network” get to see anything worthwhile about me. Anyway, if you read my blog at all and are in my network, chances are you’re “ok” to add, so feel free to find me, Michael Dickey. Or email/comment and I’ll find you instead.

when production data is allowed to visit the slums

Adam over at EmergentChaos posted this blurb, which I’m also going to quote, in regards to an Accenture data loss incident:

Connecticut hired Accenture to develop network systems that would allow it to consolidate payroll, accounting, personnel and other functions. Information related to Connecticut’s employees was contained on a data tape stolen from the car of an Accenture intern working on an unrelated, though similar project for the State of Ohio. (The tape also contained personal information on about 1.3 million Ohio residents.) The intern apparently had been using the Connecticut program as a template for the Ohio project.

Holy shit, do I hate when developers insist on using production data in development environments. It is amazing how difficult that fight can be: getting them to use test data, or to take production data and thoroughly scrub it on the first copy down. Of course, later on they want "refreshes" downward, or they start sharing amongst themselves when one of them wins the fight for their project…

Couple that with the fight to allow them to put such data on their laptop, and you get a lot of bad blood pretty quickly over just two out of a gazillion issues.

In coming years, companies that allow their data to be used by someone else are going to want written statements about who has access to that data and where. Will it be on development systems in the squishy internal network, or available for an intern to query out and take home? Can you provide names of everyone who will have access? If you have any DBA duties, start preparing for this storm now! These questions are being whispered today, but often aren't taken too seriously…yet.

1- Know who has access to what data, including queries as well as full database access (a starting point is sketched after this list).
2- Provide a process for requests and approvals for access to databases.
3- Know who ran what and took out what data. If an intern pulls a bunch of it out, you had better know it when they do. Know how to pull those logs and massage them for the answers.
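As a starting point for #1, here is a rough sketch (the server and database names are placeholders, and it assumes SQL Server 2005 or later with integrated authentication) that dumps who belongs to which database role:

# sketch: list database role membership on a SQL Server 2005+ database
$conn = New-Object System.Data.SqlClient.SqlConnection -ArgumentList "Server=dbserver;Database=payroll;Integrated Security=SSPI"
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = @"
SELECT m.name AS member, r.name AS role
FROM sys.database_role_members rm
JOIN sys.database_principals r ON r.principal_id = rm.role_principal_id
JOIN sys.database_principals m ON m.principal_id = rm.member_principal_id
"@
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { "{0} -> {1}" -f $reader["member"], $reader["role"] }
$conn.Close()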

These are just a few basic management questions that, if left unanswered, will leave management in a position to make uninformed decisions and take uninformed actions.

As a side note, other questions are being and should be asked about the whole lifecycle of that data when it leaves the nest. Is the transmission of that data secure (SFTP, FTP, Web, Email…)? Is the first stop for that data secure and/or temporary (your contact’s email box, the ftp server…)? How does the data get to the desired location or is it kept somewhere internally before being used (uploaded to a file server, sits in someone’s PST file, gets backed up to tape from the ftp server…)? When at the end location (database, hopefully), who has access to it? When the work is done or the contract terminated, what is the data removal process (tapes, servers, databases, official backups, backups the developers have made…). Yes, it’s more than the DBA, but really the easiest place to start is with the DBA duties.

powershell: list of sites in IIS

Getting a list of all the sites in IIS 6 is typically as easy as right-clicking Web Sites and choosing Export List. I decided I wanted to do this through PowerShell. I’m sure there are plenty of ways to do this, but this is one I got to use today.

# bind to the IIS metabase and print each site's friendly name
$objSites = [adsi]"IIS://serverorlocalhost/W3SVC"
foreach ($objChild in $objSites.Psbase.children)
{ $objChild.servercomment }

This should output all the names of the sites that you would see in the IIS management console. If you want to know what else can be pulled, grab one of those objects and pipe it to get-member. I haven’t figured out how to pull the Home Directory, but IP should be under .ServerBindings, the ID should be under .Name, and so on. I suspect IIS7 will be even easier to manage via PowerShell.
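Following up on the bits I was not sure about, here is an untested sketch that also pulls the site ID, the bindings, and (if my guess about the ROOT virtual directory is right) the home directory:

# untested sketch: for each site, show the friendly name, numeric ID, bindings, and
# the Path of its ROOT virtual directory (which I believe is the home directory)
$objSites = [adsi]"IIS://serverorlocalhost/W3SVC"
foreach ($objChild in $objSites.Psbase.children) {
    if ($objChild.Psbase.SchemaClassName -eq "IIsWebServer") {
        $objRoot = [adsi]($objChild.Psbase.Path + "/ROOT")
        $objChild.servercomment    # friendly name
        $objChild.name             # numeric site ID
        $objChild.serverbindings   # IP:port:hostheader
        $objRoot.path              # home directory, if the guess is right
    }
}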