vmware server 2 on ubuntu 9.04 is painless now

Installing VMware on Ubuntu is surprisingly easy these days. It has been a couple of major releases since I last did so, but this weekend I rebuilt my VM host box.

I installed Ubuntu 9.04 Server and chose the Virtual Host option; I don’t have a good reason why, other than that’s what the box would be. Once done, the install leaves the box at a command-line prompt.

After a little reading, I found out that VMware Server 2 now installs with a web-based admin interface instead of the old GUI-required interface. Whoa, big improvement! I don’t need Gnome anymore!

The rest of the installation went smoothly, with the only difficulty coming from downloading the VMware Server 2 tarball through Lynx (hint: sign up for a throw-away account on a different box, then just sign in on Lynx). But after that, no more magic tricks were needed to get VMware working on Ubuntu. I accepted all the defaults other than the VM storage location. I had a password set for root, so I could use root as the login for now.

remote exploit in soulseek p2p client published

I’ve long wondered when we’d see more P2P client attacks; I mean really, thousands of always-on clients accepting traffic from the network?

Seems my P2P network of choice, SoulSeek, has had an exposed vulnerability in its client app since at least July 2008. Pretty nifty! The software accepts and processes queries for your shared files, and it seems the length of those queries isn’t handled properly.

Just think, I could have continued using rootable software for years if not for some measure of full disclosure. Pah.

I like SoulSeek and have used it for about 6 years now as my primary music exposure tool, although I am open to new places since my searches are not as successful as they used to be. What’s more, there has not been a whole lot of movement from the SoulSeek developers or the community in quite some time, although the forums still have a trickle of activity. It is not surprising that the exploit author got no response. I’ve had the feeling in the past year that this has become a bit of a headless beast.

Of note, the exploit author mentions using a Python-based SoulSeek client. This probably means there is plenty of documentation on what SoulSeek does and how to interact with it.

cnet interview of undercover fbi agent mularski

Snagged a link from I-Hacked.com which goes to an interview with an FBI agent who spent 2 years undercover to infiltrate a major cybercriminal group/forum. Amazing read!

(By the way, using a name that references Teenage Mutant Ninja Turtles is automatically awesome.)

I find it interesting to hear that cyber criminals often do not match the perception we have of thugs and hooligans and otherwise very scary people who commit crimes. A lot of these guys are, as he said, just misguided people who are otherwise very nice and normal. This should speak loads about how we define and view the moral right/wrong lines these days. Being a physical criminal (not including white-collar corporate crime) seems to come with certain physical or even psychological (arrogance) traits. You can often “feel” that someone is a criminal just by how they dress, carry themselves, and interact with others. Being a criminal in cyberspace may be just as easy, morally, as playing an avatar in Second Life or a character in World of Warcraft. (Granted, this will probably change as organized crime brings its physical bluntness into the cyber ranks, as alluded to in the interview.)

One question that didn’t get asked that I would have asked: “Do you work alone undercover, or do you have a team of technical experts helping out as well, giving you advice and walking you through things like securing the servers?” Really, it would seem easy to turn around and ask your cadre of geeks various questions, since the people on the other end can’t see into your office. You’d just have to be careful that only one person ever did the “talking.”

powershell: perpetual script restarting itself

I’ve been working for a while on a way to keep a perpetually running script on a server alive, even though it has a slow memory leak (not surprising, since I don’t think scripts are meant to run forever). My previous attempts are OK but leave some small issues on the table.

It might be easy to suggest just setting up a Scheduled Task. Well, yeah, that is easy, but it is my policy not to run Task Scheduler on servers unless absolutely necessary, and never on externally accessible servers.

As another alternative, I have decided to have the infinitely running script check its own memory use, respawn a new copy if the memory use is too high, and then kill itself. This turned out to be pretty easy once I started looking into it.

# spawn a new powershell.exe process to run a fresh copy of the script
$spawner = New-Object System.Diagnostics.ProcessStartInfo
$spawner.FileName = "powershell.exe"
$spawner.WindowStyle = "Normal"
$spawner.Arguments = "-noexit -noprofile -command cd d:\path `; ./script.ps1"
[System.Diagnostics.Process]::Start($spawner)

# kill myself ($pid is this powershell.exe's own process ID)
Stop-Process $pid

The only fun trick here is $pid, an automatic variable that holds the ProcessID of the host powershell.exe process. Also notice the two commands in the Arguments string: I first change directory (cd) over to my path, because otherwise the new process starts in a default path, and then I start up my script like normal. The backtick-escaped semicolon simply comes through as a literal semicolon in the argument string, which is what separates the two commands for the new process.

Checking memory is pretty simple as well.

# find our own process and convert the working set from bytes to megabytes
$Process = Get-WmiObject Win32_Process | Where-Object { $_.ProcessID -eq $pid }
$Memory = $Process.WorkingSetSize / 1MB
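
Putting the two pieces together, the whole thing boils down to a loop like the following. This is just a minimal sketch: the 200 MB threshold, the d:\path location, script.ps1, and the 60-second sleep are placeholder values, not what my actual script uses.

# perpetual loop: do the work, watch our own memory, respawn and die when bloated
while ($true) {
    # ... the script's real work goes here ...

    # check this process's working set in megabytes
    $Process = Get-WmiObject Win32_Process | Where-Object { $_.ProcessID -eq $pid }
    $Memory = $Process.WorkingSetSize / 1MB

    if ($Memory -gt 200) {
        # spawn a fresh copy of this script in a new powershell.exe...
        $spawner = New-Object System.Diagnostics.ProcessStartInfo
        $spawner.FileName = "powershell.exe"
        $spawner.WindowStyle = "Normal"
        $spawner.Arguments = "-noexit -noprofile -command cd d:\path `; ./script.ps1"
        [System.Diagnostics.Process]::Start($spawner)

        # ...and kill this copy
        Stop-Process $pid
    }

    Start-Sleep -Seconds 60
}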

google reroutes some traffic to asia due to one system?

Really? Errors in “one of our systems” at Google caused issues felt across North America? And it was because traffic was rerouted over to Asia instead? Yeah, that doesn’t sound scary and suspicious at all, for multiple reasons.

Some web traffic? You mean some web traffic in the US or some web traffic worldwide? There’s a big difference there, especially if you live in an area that was, in fact, fully affected, not just “somewhat” affected.

moth: vulnerable vm image for web app security scans

If you need some vulnerable web sites and scripts to test your security tools against, Moth has been released. Moth is a VM image hosting various vulnerable services. Their blog mentions there is a listing of the vulnerabilities contained in the image, but I’m not sure if that is just a list or if there are also details on how to find or leverage those vulnerabilities.

I think the point of this project is to provide a test bed for security tools and also to demonstrate the workings of a couple of web app firewalls (mod_security and php-ids).

useless notes from the verizon data breach report 3

I suspect some of the most discussed pages of the Verizon DBIR were 41-43: PCI. I’m going to add to the discussion, while keeping in mind that these cases were all breaches, most likely occurring in companies lacking high levels of security controls and awareness, probably lacking PCI initiatives, with a couple of highly targeted victims. I’m also going to assume every breach victim was supposed to be meeting every PCI requirement equally, though this may not be the case.

1. 19% were claimed or found compliant, yet had breaches. Doh. More on this in bullet 5.

2. If 19% were PCI compliant, I would expect Table 10 on page 42 to show no number less than 19% for any PCI requirement, since an organization that is compliant overall must, by definition, be meeting each individual requirement. Oops, 5 requirements are below 19%. It’s possible I’m misinterpreting.

3. No PCI requirement over 68%! Ok, I know this data is going to trend very low because they’re breach victims who sought external help, but only 30% utilize a firewall (maybe lack of internal firewalls?) and only 62% use/update AV (the rest Linux?). Some of these other requirements I can understand since, strictly speaking, they can be hellish to meet: Req 6 (dev/maintain secure systems/apps), Req 10 (track and monitor access), Req 7 (restrict to need-to-know [I think few are fully honest on this one, or just say everyone needs to know]), Req 8 (unique ID [does using service accounts to connect to the database from the apps count?]). Seriously, those four requirements are a potential nightmare depending on how strictly you scope them. They are necessary, don’t get me wrong, just nightmares, so I’m not surprised how low they score.

3.5 I’m going to go out on a small limb here and say thank you for these numbers in Table 10! First, I don’t think we get a good enough picture anywhere about how compliance really appears, especially when it’s in someone’s best interest (they’re paying you to pass them) to go easy. And I would suspect that the Verizon investigative team is more thorough and more technical than most auditors (sorry, but I gotta say it!). This might be because auditors can only see what they’re shown, but investigators get to see what they’re shown and wherever they follow the trails. Sadly, this doesn’t get its own point in my list because these are just numbers from victimized companies, not necessarily indicative of the average company, who may be more PCI-aware.

4. “…these breaches, in general, did not occur in organizations that were highly compliant with PCI DSS.” (pg 43) I’m not sure I could really make this statement based on the data in the report. Yes, among the victims it looks like there may be a correlation between PCI compliance and being victimized, but I’m not sure you can conclude that without knowing the full measure of how many companies are compliant and how many are not. By the way, I think it should be obvious on its own that lack of PCI compliance indicates crappy security; I just don’t think the data necessarily backs this.

5. I shouldn’t include this, but I’ll briefly rant that PCI has an image issue, one they maybe didn’t create but have done little to fix: the perception that PCI = secure. Against that perception, this report is a dagger in the side: 19% were (supposedly) compliant yet suffered a breach. CSOs collectively groaned at that mark, especially those who raised PCI up on that too-high pedestal to cthulu their budget gods.

useless notes from the verizon data breach report 2

One of the major recommendations of the Verizon DBIR is to ‘collect and monitor event logs.’ You might think this is a no-brainer, but further into the report we learn that a stupid majority of these breaches were “found” via third-party notification (70%, pg 38).

Hell, I would consider the next two categories “lucky” events, where someone noticed an issue and poked around enough to uncover the problems; that adds another 24% of the breaches. In fact, only 8% of the breaches were found by what I would consider detection methods (unless the audit parts were luck too). Yuck. This means internal detection is failing or not being used properly. (Granted, the data points in this report are from organizations that most likely do not have strong security controls and programs in place, so these numbers might be lower than general averages.)

This morning I read about a UC Berkeley breach on the LiquidMatrix site. This breach went undiscovered for 6 months:

“…when administrators performing routine maintenance came across an ‘anomaly’ in the system and found taunting messages that had been posted three days earlier…”

In other words: Some admin was on the box for other reasons and happened to find the messages. “Hello, what’s this? Oh crap…” In other words: sheer fucking luck. (Or bad luck, if you look at the 6 months it took to see issues…)

We need to continue to push for 3 things:

1. Better detection tools. I consider this the first and least important of these three items, partly because blaming tools is like blaming someone else for your issues. “Well, the tool sucks, so it’s not my fault!” That’s an irresponsible knee-jerk reaction. Yes, the tools need to get better: more efficient, more accessible, and smarter.

2. Using the tools. Tools that gather data, or hell, even make decisions on data, still need humans to monitor them! Collecting logs that never get looked at is nearly as bad as never collecting the logs. Collecting logs that a system generates decision points against and issues alerts on that are never responded to is in the same boat. Throwing in tools but not having administrators tune, watch, and respond is silly. I would also include properly deploying the tools in this category, especially when it comes to administrative decisions. Where do you put your IDS sensors? Where do you put your Tripwires? What files/folders/systems do you monitor? What are your tuning standards and written response policies? Is there any consistency to your investigations?

3. Get the staff. I’m not going to be one of those people who think tools should be perfect. I think it is perfectly fine for an admin poking around a server to be the discoverer of an incident that slipped through some cracks. What I don’t think is perfectly fine is that discovery taking 6 months. Would a tool have seen ‘taunting messages’ on a server? It might have noticed new files, but it would never be able to read those files and deduce their intent. I firmly believe a human needs to poke around and “feel out” anomalies when possible. If an analyst has few alerts on his table, he should be empowered and encouraged to poke around, scan some file servers for recently updated files, run surprise audits on access levels, etc.; a quick sketch of that kind of poking follows below.
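
As one trivial example, PowerShell makes the file-server sweep a short pipeline. This is a sketch only: the \\fileserver\share path and the 3-day window are made-up placeholders, not anything from a real setup.

# list files on a share modified in the last 3 days, newest first
# (\\fileserver\share and the 3-day window are hypothetical placeholders)
Get-ChildItem \\fileserver\share -Recurse |
    Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-3) } |
    Sort-Object LastWriteTime -Descending |
    Select-Object FullName, LastWriteTime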

physics envy, art of security, and patterns

Bejtlich continues the discussion on his blog about risk metrics with a post on “physics envy.” I’ve followed several articles and postings on this topic (several lately, but also over the years), and it’s nice to see these other thoughts.

I was actually just thinking this weekend how much security is still an art no matter how much we want to apply numbers and statistics to it. I can probably tell you what risks you need to look out for and which ones are not going to be a big deal. But it becomes *much* harder when I want to give some hard numbers or determine a value so I can tell you what to spend.

I can prioritize efforts as well, in my own mind, but trying to justify them with numbers and budgets becomes mind-swimmingly annoying. I can tell you, from a gut level, when you’re spending too much to protect something silly. I can outline and detail an effective security posture, but if you want me to back it with metrics, I’m going to hate you.

There might even be people who disagree with my prioritization and steps, and it’s just unfortunate that I’m right and you’ll have to be wrong. 🙂

Is this like trying to apply a level of precision to security spending that we just can’t have because there are simply too many factors? Is this like trying to find that magical formula to solve the stock markets (or your perfect fantasy baseball roster or the exact match-ups in the Final Four)?*

I suppose the old approach is still best: do what measurements you can, at the very least try to align the results with what your gut tells you, and then be consistent with those practices over and over and over… But that still makes me feel like we’re just tainting the numbers to say what we want, which devalues their integrity completely.

* Hell, I think it’s natural that we have this crazy tendency to identify patterns, even ones as silly as lucky underwear on night games, a certain routine on game days, or praying just the right amount for deliverance… It doesn’t help that nature so often promotes this tendency by being exceedingly mathematical, from chemical reactions, to photosynthesis, to fractals, to the Nile (I sound a lot like the narrator from the movie Pi, but that’s coincidence, as I would have the same thoughts regardless…hell, I don’t even remember that movie except for the end and the music). But none of this means there *must* be a pattern, especially with the ultimate variable: human choice.

useless notes from the verizon data breach report 1

I have been slowly reading through the Verizon Data Breach Report (which is awesome!) and one thing kept niggling at me. Partway in, this popped into my mind: are the numbers maybe skewed by just one or a couple of huge cases?

My Hypothesis: Only one or a couple of breach cases are responsible for a huge majority of the records breached.

So I started taking notes and went back into the report a bit. Satisfied with my findings, I read on not one more page before the authors outright stated, on page 32: “The top five breaches account for 93 percent of total records compromised.” Way to deflate my balloon!

Nonetheless, it does diminish the value of the graphs dealing with the number or percent of records, which I think the authors acknowledged by keying more on the breaches and less on the records disclosed. So that’s good!

Following are the notes I took to investigate my hypothesis. They’re here mostly just to hear myself talk and don’t necessarily have much actual use otherwise. But feel free to read if you want.

90 breaches in the study (pg 6)
285,000,000 records involved (pg 6)
financial services account for 30% of the breaches (~30) (pg 6)
financial services account for 93% of the records (265,000,000) (pg 7)
external sources account for about 93% of the records (266,788,000) (pg 11)
median of external records per breach is 37,847 (pg 11)

I’m going to guess that all of the meaningful financial services breaches came from external sources, considering the numbers above. This means that out of 30 breaches with a total record disclosure of 265,000,000, the average breach should be around 8.8 million records. If this were a normal distribution, the average and median would be similar, but they’re not even close. To me, this indicates just a couple of large numbers, while many of the others were quite small.
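
Just to make the back-of-the-envelope math explicit (every figure comes from the report pages cited in the notes above):

# quick sanity check on the financial services breach numbers
$records  = 265000000   # records tied to financial services (pg 7)
$breaches = 30          # ~30% of the 90 breaches in the study (pg 6)
$median   = 37847       # median records per external breach (pg 11)
$average  = $records / $breaches
"average: {0:N0} records per breach" -f $average            # ~8,833,333
"the average is {0:N0}x the median" -f ($average / $median) # ~233x: a few huge outliers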

95% of records were breached by an attack of high difficulty (pg 28)

There are some other numbers which indicate there was really not just one single large incident, but at least a few. If there were just one large incident, these numbers would also be nearly 90%, but they’re not:

Financial services were almost certainly targeted by just the larger-percentage hacking types from the graph on page 17: SQL injection, improperly constrained or misconfigured ACLs, and unauthorized access via default or shared credentials. The attack path was a web application (79%), remote access & mgmt (27%), and/or end-user systems (26%). (pg 19) This could certainly indicate at least 2 major incidents that account for this huge number of records breached in 2008. In fact, I wouldn’t be surprised if one large incident was due to a web app, and a second was a combination of remote access and end-user systems, with those two attacks accounting for the huge majority of the records.

I’m actually surprised that no graph was presented showing what percentage of the records fell to targeted breaches, as I suspect it is huge, at least among the highly difficult breaches.

Certainly these couple of financial services breaches housed online data, as 99.9% of all records were online data (pg 30), i.e. payment card records, which were 98% of all records (pg 32).

Hell, page 32 confirms my suspicions: the top 5 breaches contribute 93% of all records. Doh!

visiting 5 (or maybe just 2) security pet peeves

A blog article over on ZDNet lists 5 IT security pet peeves. I thought I’d tackle them.

Too many people still believe ignorance is an effective security strategy. – I’m not sure so many people actively believe in this strategy so much as they simply operate that way. It’s the same mindset where, when your toaster breaks, you try it 10 more times hoping the issue just goes away and it magically works tomorrow. Or the old cliché of see no evil… Or our other habit of assuming something won’t happen to us. Sadly, the author dives headfirst with eyes closed into the “security/obscurity” topic and just ends up sounding closed-minded. Watch how you word these things, please. There *is* value in obscurity, to an extent. The correct phrasing is to not rely on obscurity alone for security.

People who know nothing about IT security have godlike power over matters of IT security policy. – The examples given (congress, judges, law enforcement…) reek of an “IT guy” who only really pays attention to cnn.com issues as a consumer. Sure, he can manage his home all-in-one fax and 2 laptops on his home DSL…

Anyway, despite the tone, I think this item otherwise hits the nail squarely, and it’s related to the first bullet. Too many people wield significant power over IT security who should have no business mucking in it other than as an overall business-strategy concern. And while there are execs who say they stay strategic and let their minions do things (yay!), there is also that top-down undercurrent of productivity pressure that steals valuable analyst time away from actually verifying and maintaining security. Ever try to explain to a non-technical person the art of investigating a single IPS alert? You lose them in 30 seconds every time. But 2 days later they wonder why you spent more than 5 minutes on a mysterious alert that could portend ominous happenings on the wire. These same people wonder why you can’t just set up logs and never, ever read them. “But we gather them, right? Oh, it broke 5 months ago and we never knew because we don’t check them? Oh…shit.”

I had more to say about the rest of his bullet points, but have decided to leave it at a summary judgement. The rest of the bullet points reek of a non-corporate person who runs his home network and otherwise plays backseat IT guy. They’re also narrow-sighted consumerland items that make him seem inexperienced and annoyed that his social network browsing is interrupted now and then by kiddies. (And yes, I have feelings on both sides of the fence when it comes to visibility into communication.)