March 2009 Archives
Fun times continue with PCI DSS. Anyone with an idea of security saw all of this coming (and this can be applied to any security checklist...):
1. PCI "compliant" firms suffer breach.
2. Companies/people question PCI.
3. PCI blames firms for not being perfect every moment of every day.*
4. PCI DSS is only guidelines and checklists that don't actually DO the securing in and of themselves.
We've all just been waiting for more inevitable data points on the grid of this argument.
The argument revolves around how PCI markets their DSS and how people accept it. If PCI markets it as a rubber-stamp approval of ultimate security, they fail. If people expect PCI to be perfect, they fail. PCI can fix this by simply adding the caveat: "...this is where you start with security, but this alone is not a guarantee of security."
Of course, we all know how that will be taken: "If it's not perfect, it's useless!" That is an immature (or common business) argument in a realm where perfection is not possible. Sadly, this is where the media sucks (and rightly milks it for the hits/attention) and where the General Public only has immature thoughts about security. But still, PCI fails for allowing the perception that its DSS will save you, even if that was never their intention in the first place.
PCI is no better than any checklist or list of best practices.
* PCI can weasel out of any blame on any given day: just blame the QSA and/or the firm. This is another "law" of security, not just cyber but every sort of security from war efforts to the war on drugs: you can always naysay because there is no ultimate "win" and no ultimate definitions. Another "law" illustrates this: "You *will* suffer a security incident."
by michael 03.02.09 at 12:07 PM in /general
I recently got into Xbox Live (XBL) multiplayer matches in Left 4 Dead and, this weekend, Call of Duty: World at War.* So far I've been having a good time, but there is something missing in XBL multiplayer that I loved in my previous years of PC gaming.
I used to play Quake 1, Unreal Tournament I, and even the first Call of Duty, all on the PC. When you played multiplayer on those games, you would somehow get a list of servers hosting games and choose one based on various criteria, most likely latency, game settings, player population, and even reputation of the server. When you found a game that played well and was fun, you usually wrote it down or saved it as a favorite. This resulted in a list of frequented servers you played on.
Over time, I became a regular on my preferred servers, and I got to see the other regulars who were around on those servers too. In fact, eventually you get to chatting with them and form a sort of gaming friendship (or rivalry). This was excellent, as you could play with and meet several other players over time. This occurred in all three of the games I played the most, and always resulted in clan invites, friendships made, and carry-over into IRC, forums, and IM. Sometimes you could play for weeks before finally actually talking with another regular and chatting it up, having fun, etc. Every now and then you would even learn of other servers your friends liked, and thus expand your exposure.
In XBL, you typically dive into the multiplayer games and get thrown into a random game with a slot open, which is likely just an ad-hoc host in a farm of host servers. There are no server names, no preferences, no continuity to the multiplayer gaming experience; no home "turf." If you want to make friends, you have to do so in the small window of time that you're both in that particular game instance. And even then, you may not be playing on the same team for the next 3 maps!
Last night in Call of Duty there were over 200,000 people playing, and maybe 35,000 in my game type (Team Deathmatch since I'm new). The chances of me seeing any repeat action from players I'd seen before are exceedingly slim. Even in Left 4 Dead, I've only had a repeat player once (notably we both remembered each other).
The way you get repeat games is to friend people you play with, immediately. This results in a watered-down friends list full of people you barely know, from friending everyone you could possibly stand to play with again. And vice versa (considering I still suck, I doubt this is a two-way street yet!). Even then, you usually have to join the games as an XBL party, or risk playing against them or not at all because their game is full. Bad choices in friending people can then become awkward moments where you'd rather avoid them...
I wonder how clan matches work in this setting? Maybe I'm still missing things in my limited exposure...
Still, there is something to be said about the continuity of the gaming experience and the community that forms around discrete servers. It would be nice if XBL had named servers, and if demand outgrew the named ones, ad-hoc hosts could spring up for peak times to catch all those people looking for a random game. Or just have such a huge pool of "server" names that they never run out.
"Aldaraan #10" is the place to be Friday nights!
* It is already annoying enough to hear 8-year-old boys talking with impunity in game, let alone in a game that now and then says, "Good fucking job, marines!" I find that many of my jokes and game jabber may not be suitable...
by michael 03.02.09 at 1:32 PM in /general
A common question on security surveys, and often an item auditors love to point out because it's "easy," is SSLv2/3 support. SSLv2 is insecure and shouldn't be used. Various sources can describe the weaknesses (pdf) better than I can, but I will say I don't know if anyone has made SSLv2 attacks very practical, even if a browser would still drop down to SSLv2 anymore.
So how do you check what SSL version your web site supports?
SSLDigger (available as a free Foundstone tool) is a GUI tool that accepts a site (or IP) and digs on the supported SSL ciphers. A nice tool, but it makes no distinction between what is SSLv2 and what is SSLv3. However, it does rate ciphers on how weak they are, which can be a nice guide if you're digging down that deeply and enabling or disabling individual ciphers.
THCSSLCheck is a Windows command-line tool. THCSSLCheck takes things a step further and groups ciphers based on their SSL version, which is a nice indicator. Very clean!
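If I remember the usage right (I'm going from memory here, so run it without arguments for the real syntax), the invocation is just:

THCSSLCheck.exe www.mysite.com 443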
Yup, OpenSSL (Windows and Linux) can also check SSL strength, and it might be the easiest test to understand. It also shows content that it receives from the website, which is helpful if you have a proxy, filter, or load-balancer in the way that redirects SSLv2 connection attempts. The above two tools simply determine whether a cipher negotiation was successful, but they don't report any context. In my case, I have load-balancers in front of my web servers that answer SSLv2 connections with a landing page saying we don't support SSLv2. So yes, the scan showed a positive, but it's not a real positive. OpenSSL will catch this if you wait a bit and hit Enter a few times.
openssl s_client -connect www.mysite.com:443 -ssl2
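And the same check for SSLv3, for comparison:

openssl s_client -connect www.mysite.com:443 -ssl3

An immediate handshake error means the protocol is disabled; a certificate chain and session details mean it's enabled (and from there you can type a GET request by hand, which is exactly how you spot a landing page like the one my load-balancers serve).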
by michael 03.04.09 at 10:37 AM in /general
We now know how to test for SSLv2. How do you fix it?
IIS6: Well, go ask Microsoft. It is a registry edit and not a GUI option.
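If memory serves from their KB article, it boils down to a single registry value; verify against the KB before touching SCHANNEL, and note that it takes a reboot to apply:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f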
Apache httpd.conf: "SSLProtocol +All -SSLv2" or even "SSLProtocol -All +SSLv3". Further cipher tinkering can be done with the SSLCipherSuite directive.
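In context, the relevant httpd.conf bits end up looking something like this (a minimal sketch; the cipher string is only an example, so tune it to taste):

# allow all protocols except SSLv2
SSLProtocol +All -SSLv2
# and drop the null/export-grade junk while you're in there
SSLCipherSuite HIGH:MEDIUM:!aNULL:!SSLv2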
For everything else, you need to consult documentation. In my case, I have Citrix Netscaler load-balancers in front of my web servers. In the port 443/SSL vservers->SSL tab->SSL Parameters, I would uncheck "SSLv2" and uncheck "Enable SSLv2 URL." That second one is just the redirect for browsers wanting to make SSLv2 connections when SSLv2 is not wanted. Of course, this can also be done via SSH.
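The SSH route would be something along these lines; I'm going from memory on the exact parameter names (and the vserver name here is made up), so check your firmware's CLI reference first:

set ssl vserver mysite-443-vs -ssl2 DISABLED -sslv2Redirect DISABLED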
by michael 03.04.09 at 11:12 AM in /general
I've casually used IE7 on a test work machine and on my gaming machine, i.e., not very much and certainly not enough to play around with the interface. Last evening at work we rolled it out to all desktop users. Holy sweet mother, is that top bar a cluster of a mess! I normally wouldn't mind if I could fix it, but IE7's customization is pretty much half-assed.
Optional menu bar? What are they smoking?
Can't move the menu bar to the top where it belongs without a registry edit?
Can't remove the Search box without a registry edit? (Sketches of both edits follow this list.)
Can't drag pieces up into the top bar?
The Home button is now broken away from the Back/Forward/Reload/Stop buttons?
Can't edit or move the top bar?
Star (Favorites) buttons I can't remove?
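For the curious, here are the two registry tweaks I've seen floating around for the menu bar and Search box gripes. I'm going from memory, and these are certainly not sanctioned options, so treat the exact keys as assumptions to verify before touching a corporate build:

:: move the menu bar above the address bar
reg add "HKCU\Software\Microsoft\Internet Explorer\Toolbar\WebBrowser" /v ITBar7Position /t REG_DWORD /d 1 /f
:: hide the Search box entirely
reg add "HKCU\Software\Policies\Microsoft\Internet Explorer\Infodelivery\Restrictions" /v NoSearchBox /t REG_DWORD /d 1 /f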
Again, I wouldn't mind it if I were allowed to reset them all and move and disable what I want, but I don't see a way to make this look decent at all! :( I tend to be as minimalist as possible with my browser, while still being functional. Small top bars, only 2 rows, and nothing that I don't otherwise use regularly. I'm a computer user and thus I am fine using hotkeys or Menu bar dropdowns for occasional stuff. For tabbed browsing and a bar of Links that I only use on a work system, I'll make do with 3 rows of junk on the top. IE7 has me stuck with 4 at the moment.
And while I'm not against registry edits, it is obvious Microsoft did not intend for these options, and I dislike adjusting a corporate browser away from the standard settings.
by michael 03.05.09 at 8:42 AM in /general
Mubix posted an excellent question via Twitter today. Twitter promptly decided to poop out on me...but even so, I thought it a question worthy of blogging about.
mubix: Polling the audience (serious answers please). If you could get your boss to understand one security concept fully, what would it be?
Take a few moments to think about that one. Grab a stress ball, sit back and sip some coffee, whatever it is you do when absorbing something, but just take a moment to think.
Lots of things come to mind. Trust no one! Audit and change management! Patch! Hire, retain, and train competent staff to do the heavy thinking! You can never have too much information (just bad consumption of it). Support the business securely.
I finally posted back the following:
@mubix Hard question, and worthy of a blog post. I'd say "You *will* have a security incident. Plan for it and plan to find it."
I was hoping for something more profound, like "Wax on, wax off," that would encapsulate a whole zen-like frame of mind where all the security pieces fall into place. Alas, this was my contribution. At least I feel it states one of our fundamental laws of security and sets the tone to properly detect, monitor, check, audit, and respond to incidents.
by michael 03.06.09 at 11:23 AM in /general
I do Get It that IT needs to align with business. But that doesn't mean I think everything is then rosy in the house and all the puppies are happy. It's an easy thing to say, but a hard thing to adhere to (or easy, if you like statistics and can twist anything into a business value-add!).
My boss's boss recently related a story about a VP who was tasked with turning around a company that had the right technology but the wrong business strategy. This included constantly evaluating whether the technology (and projects) serves the strategy of the business.
That's great, but to me that reinforces the idea that you only do enough in IT to accomplish the job, and that's it. You let the rest languish and most likely don't do any housekeeping. Housekeeping includes the things that make security work: logging, alerts, detections, testing to make sure things you put up 6 months ago still work, audit settings, patches and updates (that don't add any new features you care about), etc.
Yes, that is a way to go. For example, you don't need absolutely spotless event logs on your Windows servers. But it is also a way to foster a completely reactionary culture regarding existing technology. I think that approach works better for new technologies and projects.
It just means that someone has to value security and housekeeping. And I'll always go back to the idea that so few people value personal security, judging by the lack of security measures in their own homes, let alone in the businesses they own, until they suffer for it. It's like finding your God only when you're deeply fearing your own mortality (or feeling excessively guilty about something and needing an explanation).
by michael 03.06.09 at 1:39 PM in /general
This is pretty slick, from the Attack Research site: you can use Meterpreter plus a memory-dumping tool to remotely dump memory and pull password hashes from it. I see Bejtlich also posted about this: Part 1 and Part 2.
by michael 03.09.09 at 10:39 PM in /general
Thanks to Douglas Haider for posting that MetaGeek is offering a "turn-in/upgrade" deal for Wi-Spy users. If you send in your ol' Wi-Spy, you'll get a discount on a newer one. That's not bad, especially since I got mine back when they were still $99, I think.
However, I don't use it nearly as much as I would other things I could buy at that price point (yay, opportunity costs!).
Upgrading to the Wi-Spy 2.4x is $200, and might be worth it. Paying $400 to get a DBx, which really just adds 5 GHz monitoring, might be a bit of a stretch, especially since I have yet to encounter a wireless LAN operating in that range. A full product comparison is available.
So, a total of about $300 for a device like this is right about the borderline for me. It is nice and really awesome to have when you have a use for it, but otherwise tends to sit around doing nothing special. If the device itself were just a bit cheaper, I would consider this a no-brainer. Even still, I'm really considering the 2.4x upgrade...
by michael 03.11.09 at 12:54 PM in /general
Mubix recently posted a how-to on OzymanDNS; basically, how to create an SSH tunnel over DNS. And he has now posted thoughts on whether tutorials like this are unethical (and on the situation he is in himself as a Hak5 host, mentioning how this can circumvent hotspot captive portals). I highly suggest reading both posts, as he makes great points (and I love me some simple tutorials).
My position is probably fairly easy. I'm fully in favor of such tutorials, but I do appreciate, and add to any advice I give, any information on whether something is potentially illegal or something you can get fired over if you do it at work. Sure, I hate those "only for educational purposes" blurbs in almost every 2600 article as much as anyone (know your audience!), but they are useful when someone truly doesn't think about those consequences.
Sure, some teens watching Hak5 might turn into tomorrow's black hats, but they may also turn into tomorrow's security geeks because of the information they received in pushing systems to and beyond their limits, or challenging controls that are not fully secure, or simply trying out something new that sparks new ideas.
I appreciate that Mubix thought it over, and I do the same whenever I give advice. However, if we tiptoe too carefully around ethics of that nature, we'll continue down the road of not sharing enough information, which I believe harms our collective knowledge and security.
by michael 03.11.09 at 1:23 PM in /general
While I'm not all caught up reading blogs, I hadn't seen this yet, so thought I'd share. This report from TechCrunch, of a Google Docs issue where some docs were shared out beyond the intention of the authors, can be filed under "Illustration of why you lose security control when using other people's services." While this issue may not have been leveraged by anyone, it is just one in the inevitable series of issues such services will create, especially as they want to break into the enterprise markets.
In a privacy error that underscores some of the biggest problems surrounding cloud-based services, Google has sent a notice to a number of users of its Document and Spreadsheets products stating that it may have inadvertently shared some of their documents with contacts who were never granted access to them.
It is not my choice to confuse cloud computing with Web 2.0 in the linked post...
One thing I'd love to see: your exec team hosts and shares highly confidential docs on Google detailing an upcoming, confidential takeover. Google decides to start serving your exec team ads via AdSense or on search results pages, plastering up the name of the competitor you're intending to purchase. Or the takeover mediators...
by michael 03.11.09 at 3:16 PM in /general
This is a complicated issue and may only make sense to me, but I'd like to document it for future reference. I'll try to simplify as much as possible and stick to the crux of the matter: remotely executing PowerShell scripts from PowerShell scripts.
Pretend I have 3 web servers. On each server a PowerShell maintenance script runs perpetually (an infinite loop). If I have a new web site to build, I edit a text file in a network folder. The maintenance scripts see this and execute a "createsite" script. Sometimes, due to downtime when IIS needs to be stopped, I need these scripts to run in an orderly fashion, so one maintenance script is always a "master" of the others.
I've finally gotten sick of having a perpetual script running on each server (using resources and requiring an interactive login). What I want is one server which coordinates the execution of all my other little task scripts on the 3 web servers. Yup, I need to figure out remote execution!
Yes, PowerShell v2 has decent remoting capabilities, but I can't effectively leverage them quickly: we still use IIS6, my web servers have PowerShell v1, and there is a lot of rewrite time needed to update the scripts properly. Instead, I'd like to get going with this architecture with as little effort as possible.
I'll use psexec and Powershell.
First, I need to make sure the account that PowerShell will run under has a profile file set up. All of my scripts run out of d:\setup\scripts. If I want to start a remote PowerShell session under a user and be able to relatively reference other scripts inside my first one, I need that user's profile to start in d:\setup\scripts.
Create the file profile.ps1 in ..\documents and settings\script user\my documents\WindowsPowerShell\. The contents:
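It only takes one line, assuming nothing else belongs in this account's profile:

# profile.ps1 - drop every session for the script account into the scripts folder
set-location d:\setup\scripts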
This is the call I do on another server with an account that I designate as my web installer:
./psexec \\WEBSERVER1 -u DOMAIN\USER -p 'PASSWORD' /accepteula cmd /c "echo . | powershell -noninteractive -command `"& 'd:\setup\scripts\createsites.ps1'`""
Whoa, wait, what's that "echo . |" thing in there? It lets me see the progress of my script, and it lets psexec work properly on the target machine so my calling script can continue on with life. I found that just calling a powershell instance directly led to powershell/psexec never executing properly.
Did I need that -u and -p declared? Strangely, I did, even though the script was running as that user. If this wasn't declared, I don't think the Powershell profile was loading properly.
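With that call working, the "master" side is not much more than a loop around it so the servers go one at a time, in order. A rough sketch (the server names are hypothetical, and I'm assuming the same credentials work on all three):

# run createsites on each web server, one at a time, in order
$servers = "WEBSERVER1","WEBSERVER2","WEBSERVER3"
foreach ($s in $servers) {
    ./psexec \\$s -u DOMAIN\USER -p 'PASSWORD' /accepteula cmd /c "echo . | powershell -noninteractive -command `"& 'd:\setup\scripts\createsites.ps1'`""
}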
Why don't I use functions in my maintenance script instead of a separate script for my major tasks? I have many other pieces beyond a "createsites" task, some of which I call separately anyway. I'd much rather manage smaller scripts than one large beast of one. I'm not a software developer. :)
Why not use Task Scheduler? Let's just say I don't want Task Scheduler running on production web servers. And I want all my web servers to be managed the same.
by michael 03.12.09 at 11:21 AM in /general
Speaking of ethics, the BBC decided to do some of its own hacking for a show, Click.
The technology show Click acquired a network of 22,000 hijacked computers - known as a botnet - and ordered the infected machines to send out spam messages to test email addresses and attack a website, with permission, by bombarding it with requests.
Click also modified the infected computers' desktop wallpaper.
If the BBC doesn't get hurt by this, the lesson we can all learn is: make sure you have a hobby in journalism/reporting/television. That way, next time you get caught cracking into something, you can just say it was for research and part of a report you're doing. Then we can all laugh and share a pint, because it's all good then!
Oh, and next time I accelerate my fist into your abdomen, just let it be known it was without criminal intent. Over and over. Maybe I'll laugh during that too, to show I have no ill will in it.
by michael 03.13.09 at 10:03 AM in /general
I didn't expect to be quite as entertained by this story as I was. I apologize for not knowing where I got linked to this, but CSOonline has the first part of a two-part story on how a company that suffered a data breach did everything wrong. These are the sorts of stories that need to be told. Repeatedly. I don't care if the authors are anonymous and specific details scrubbed to protect the guilty and victimized. This sort of stuff shares details, and that's what we continue to need. We need it to learn from, and we need it to show others tangible illustrations of the risk.
...They lacked the equipment to detect a breach and, even if they did, lacked the human resources to monitor such equipment. He told us his staff consists of one full-time employee and one half-time assistant who is shared with the help desk... [ed.: a company of 10,000 users, 127 sites...]
"What logs? Remember that each business unit is different, but here at corporate we don't have logs. In fact, logging was turned off by the help desk because they got tired of responding to false alarms. Help desk reports to the IT director, not to security.
Everything starts with a basic policy from senior management that says security is important. From there flows talented staff who aren't going to just disable pesky alerts or be pulled in the IT operations/support direction 100% of the time. And so on...
by michael 03.13.09 at 11:10 AM in /general
SANS has published a story on an attack that bypassed a .NET/ASP web front end and poked at a local privilege escalation. The tools mentioned can be found here: Churrasco (has the full description), Churrasco2 (updated for Win2008), and ASPXSpy (a .NET webshell). Note that McAfee AV does detect the file aspxspy.aspx as naughty.
...developers wonder why I don't let their apps write locally...or publish directly since my replication removes rogue files automagically...
by michael 03.13.09 at 1:19 PM in /general
Mubix has posted his summary of things we wish our managers would learn, which I commented about the other day.
The #10 entry was about company buy-in and had only 1 vote, but I wonder if that single issue may drive a majority of the rest of the problems. It might not be that our managers don't get these topics; they may be in the same boat as we are, feeling unsatiated with current results.
If there is any bias, it might come from how we read the question and how far up the chain our manager is. If my manager were the CTO/CSO/CEO, I think I would answer more along the lines of #10. Maybe a good question would be, "What one concept would you want your company leaders to understand?" That would probably limit the technical responses and broaden the basic-concepts part.
Or maybe what would be your security-related mission statement (and maybe a few supporting statements in case you think of mission statements as "make the world a better place") for your company?
by michael 03.18.09 at 1:28 PM in /general
[Update 3/19/09: I'm cleaning out some unfinished posts that I didn't want to lose, so I'm just publishing them as is. This post was written nearly a year ago.]
update: Odd, there was just talk about this; maybe I was influenced in a round-about way by this discussion at Slashdot: Should Users Manage Their Own PCs? (read the comments!) Also more here.
There is increasing talk about worker angst with IT teams locking down computers and being dictators when it comes to adding software to their computers. Thin clients and terminals are suddenly becoming sexy again. Likewise, most office workers seem to have their own array of gadgets and devices that they want to use, IT policies be damned.
Rather than tackle that debate which swings both ways, I want to play devil's advocate and assume the direction is going to be taken where employees have full rights on their own fat systems. Let's say I work at an SMB that values employee happiness and creativity (software shop, video game shop, design group, etc). And the decision has been made that employees are responsible for the software on their own systems, although the company itself may front the cost of any needed software; pirating is not allowed.
What does this mean to security of that organization? I know plenty of security geeks will go into immediate defensive mode, but I'd rather delve into what approaches are needed in such a situation.
The assumptions and setting:
- Users have administrative rights to their systems.
- IT also has administrative rights.
- Users won't install pirated or illegal software, but instead get comped by the org.
- Servers are still the realm of the IT teams, so let's just not think about them for now.
What are some issues that can arise in such an environment?
- Systems may slow to a crawl as they become infected with crap upon crap.
- Internal and external networks may slow to a crawl or become unusable due to worms, viruses, scanners, and bots; both internal-only congestion and externally targeted congestion.
- Information may quickly get stolen, à la the program that installs itself and steals your AIM/WoW/bank account and password, whether actively, on a trigger, or via keylogging.
- IT may have to answer questions and provide support for non-standard programs across a huge range of possibilities.
- Users may install tools that have malicious side effects, especially if they have a laptop that goes home. Things like BitTorrent and p2p apps tend to pop up on such systems.
- Most systems will have one or several IM programs installed and in use, opening the user to phishing/spam, a potential avenue to send information beyond the corporate garden, and lost productivity if abused.
- Users will use their personal webmail accounts, opening up the same avenues.
- Any type of development or creation processes may not be possible to move from the user's computer to a server. "You want *what* installed on the web server?!"
And here are some measures to pursue. These are not in any specific order.
- A strong perimeter with aggressive ingress and egress rulesets and active logging on egress blocks. Yes, many apps will just tunnel through port 80, but that doesn't mean we should forget the floodgates. (A bare-bones sketch follows this list.)
- Strong internal perimeter to protect the DMZ and the suddenly rather untrusted internal LANs. Isolate print servers, file servers, and others from userland, letting only what is absolutely necessary past.
- Strong internal network monitoring to identify traffic congestion and unwanted communication attempts.
- The staff to attend to the alerts this stronger network posture will require. With such an untrusted userland network, bad alerts can't sit for very long, and there may be plenty of them.
- Consistent and regular user training about security concepts.
- Regular communication among employees and IT about how to properly solve various problems, use programs more intelligently, and so on. If one program can solve a problem but everyone is just using what they know, opening communication may get everyone on a standard page. It certainly is better than everyone trying the same 10 programs to solve the same problem. [update: I'm not sure what I was saying here...]
- Foster an open environment where users can talk candidly with IT and security, without expecting laughter or a quick rebuke.
This is going to be much like the TSA assuming every passenger is a threat.
- Will need an aggressive and automatic patching solution to keep the OS and major applications patched as much as possible.
- Have a strong imaging solution and architecture in place. People mess up their computers now and then and require them to be re-imaged. People who control their own computers will mess them up even more.
- Have strong network and file server anti-virus or malware scanning. Chances are pretty good that users will store their backup installs on your file server. Try to separate the screensaver crapware from the necessary stuff.
- Be proactive in supporting the software inventory needs of your users. If a user has a piece of software they had the company purchase, keep an inventory or even a backup of the install disk and serial under lock and key. This is far better than letting users manage (or steal! or lose!) their own copies. A photoshop disc left on a desk is a pretty easy crime of opportunity.
- Plan to have strong remote management of users' systems, especially when it comes to inventorying various things: accounts, installed software, running processes, resource consumption, log gathering. You likely won't parse these regularly, but some you might want alerts for, such as new user accounts appearing.
- Proactively offer to assist users with any PC questions they may have. Often, users have lots of little annoyances they live with, but offering to help with the fixable ones can often go a long way towards satisfaction not just with IT but their job as well. If a system is running slow or they don't understand why a window displays as it does, assist them with fixing it.
- When assisting users, take extra effort to include willing users in your troubleshooting. This not only opens lines of communication, but also teaches them as you go. Maybe next time they'll already have checked for that rogue process before you get to their desk!
- Might be wise to evaluate DLP technologies. While administrative rights for users on their desktops mean many forms of malware will do things like disable AV before it can intervene, many users are not nearly as sophisticated when they purposely or accidentally move important data from the safety of the corporate environment to an outside entity. It might be enough to implement DLP to stop all but the truly crafty and determined insiders, and that might be risk avoidance enough to deal with the determined ones on a case-by-case basis.
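To put a rough shape on the perimeter bullet above, an egress ruleset on a Linux filtering gateway could start as bare-bones as this (a sketch only; the 10.0.0.0/24 user LAN and the allowed ports are assumptions you'd grow to fit the business):

# default-deny forwarding out of userland
iptables -P FORWARD DROP
# allow only what users legitimately need: DNS, HTTP, HTTPS
iptables -A FORWARD -s 10.0.0.0/24 -p udp --dport 53 -j ACCEPT
iptables -A FORWARD -s 10.0.0.0/24 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -s 10.0.0.0/24 -p tcp --dport 443 -j ACCEPT
# log everything that falls through before the policy drops it
iptables -A FORWARD -j LOG --log-prefix "EGRESS-BLOCK: "

Yes, plenty of apps will ride out over 80/443 anyway, but at least the blocks become visible to whoever watches the logs.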
Sadly, the reality is that a company willing to hand out local administrative rights is likely too small to meet the needs listed above without some assistance.
by michael 03.19.09 at 8:28 AM in /general
[Update 3/19/09: I'm cleaning out some unfinished posts that I didn't want to lose, so I'm just publishing them as is. This post is a bit of a rant from summer 2008, but I feel I wanted to make some points about how IT may talk all pretty about 'aligning with business' but really we're probably always going to be stuck in some 'silo' of some fashion no matter what. Also, entities are simply not doing the simple security things correctly. This compounds the 'silo' problem... I wonder if it would help if 'business aligned with security?']
We were talking in our team meeting this morning at work, and it became a bit of a cynical start to the day. That is one thing about being in IT and being security-conscious (or being in security): you can become cynical and negative extremely quickly, and often. At least for many of us, we keep the venting in the back rooms.
We were talking about some of the breaches that have occurred in recent years and how they are still only slowly pushing proper security measures. Interestingly, it seems that most, if not all, of the media-covered breaches are the result of stupidity on the part of users, or very simple mistakes on the part of the victim company or person. Perhaps really talented hackers are not getting caught, and maybe a lot of the more subtle attacks are being buried in corporate bureaucracy and fear, but I truly think most of these incidents are borne out of mistakes or opportunity for the attacker.
This means that a depressing number of these were preventable. And a depressing number of these make us corporate goons highly frustrated, because we talk and talk and demonstrate and warn about the same issues. Not much of this stuff is new to those of us with even a little common sense.
Ask your employees who is responsible for data security, and I would be willing to bet that half or more will say IT. Another small slice will act smart and say everyone, but they're just supplying the right answer without really believing or living it. Very few will answer and truly believe that it lies with everyone. So that puts the burden on IT, for the most part.
Companies complain when we work in a silo, vacuum, or do things on our own that affect their job without other people's input, no matter how inane or useless that input may be. Which is weird, since we are supposed to do things on our own, like, you know, security.
We can often complain about lack of action or preventative planning in the upper ranks of a corporation. "It won't happen to us" is a common refrain, whether explicitly spoken or implicitly implied (I wonder if you can explicitly imply something...). But one that really annoys me is the statement, "We already have adequate security." I really hate that, especially when you ask the IT guys if we have adequate security and we immediately either give an "I-know-better" smirk or look suspicious, wondering what politico-business trap we're about to fall into based on our response. Top-down, there is a gap where eventually a C-level just doesn't know the nuts and bolts and lives in their own little reality. Not all of them, but that is a very easy cloud to fall into, especially if they feel they should lead by example and trust their employees without validating that trust with anything more than "it's never happened yet!"
by michael 03.19.09 at 9:07 AM in /general
I missed G. Mark Hardy's talk at Defcon titled "A Hacker Looks at 50," but I had earmarked it to check out. I'm glad I did, since he has a lot of great wisdom to share. I wanted to yoink his main slide bullet points just to reinforce them to myself. His talk is available online (mp4).
Here are G Mark's Observations on Life:
- Just ask.
- Don't wait for perfection.
- Become a master.
- Vision is everything.
- Never disqualify yourself.
- Challenge your limitations.
- Have a vision. Write it down.
- Speak every chance you get.
- Don't go it alone.
- Be flexible.
- Aim high.
- Be PASSIONATE.
- Beware of bright shiny objects.
- Choose tech or management.
- Do something bigger than yourself.
- Recipe for life:
- plan (take control back, take a break in the woods)
- take risk (you can always go back)
- stay focused (TTL)
- determination (how badly do you want it?)
- Don't save your best for last.
- Be generous now. (Our stuff doesn't follow us.)
- Enjoy life.
by michael 03.23.09 at 11:27 AM in /general
Clouds. Ugh. I'm still trying to slowly make sense of what the cloud is, but it doesn't help that pretty much everything is being rebranded as "cloud." Once upon a time I thought cloud computing was sort of like off-loading massive computing needs to someone else (a lot like SETI, only more commercial, or maybe more like purchasing botnet time?); now I think "cloud" refers to anything you use that isn't in your pocket or on your desk. So does this mean Web 2.0 is officially passé and "cloud" is the new Web 3.0?
Nonetheless, some thoughts which likely illustrate why I'm not getting it...
- If an enterprise isn't doing their IT infrastructure correctly already, they alone can't evaluate which cloud vendors *are* doing it correctly.
- Cloud vendors aren't doing anything magical that makes them far better than your own infrastructure.
- And if the 'cloud' fucks up, you can just blame them, right?
- At least you can see into your own operations. You can't see the cloud ops. And at least your operations can care about your business.
- Cloud companies want to make money too. Which means rather than paying contractors to make your solutions, you're paying another enterprise to create your solutions. So, what are you really buying by probably spending more? (answer: experience and blame shift, and experience is often what enterprises are avoiding paying for in their own staff.)
- Cloud, in my view, yields value in: 1) experience through repeating solutions, 2) internal scalability through repeating solutions, and 3) internal efficiency through repeating solutions. If you can provide solution A for company B, you should limit costs by basically providing solution A′ to company C, right?
- Cloud is basically a new brand for the software market, the web market, or an IT data-churning service (B2B service?). Absolutely nothing new, so pick your poison.
- While basic computing needs for enterprises are very similar, it only takes a few weeks of work to make their environments terribly dissimilar. This digs at the value any repeat solutions will have for different businesses; it's something the service industry has to deal with by stacking experience rather than selling pre-packaged products. Any developer creating solutions for multiple businesses could attest to this, I'm sure.
- And if cloud is a service, then it will always be pressured to squeeze 10 clients into the space where 6 quality-driven clients would exist. (*wave to Jerry Maguire*)
by michael 03.24.09 at 10:34 AM in /general
Jeremiah Grossman dives into the question: why isn't more money being spent on Application security when it is obviously important today?
During an event a panel of Gartner analysts asked the audience what the best way is for an organization to invest $1 million in effort to reduce risk. The choices were Network, Host, or Application security... The audience selected Application security. However, the Gartner CSO (who took the role of CIO in the play) overruled the audience's decision. They instead selected Network security, while at the same time curiously agreeing that Application security would have been the better path. His rationale was that it is easier for him to show results to his CEO if he invests in the Network.
He has a point!
I also believe it has to do with visibility and knowledge. We've had networking and systems around for quite some time, and we're getting better at operationally baking in and showing security. I don't think we're nearly as mature with application security. Unless someone codes, they really just don't get it because it is hard to visualize and measure.
There is also an experience or knowledge gap where, again unless you're a developer, you really can't effectively explain or demonstrate security or how to code securely. I've seen "senior" developers who have zero thought about security other than on a most basic level (i.e. "sure we have admin and normal user types in the system...").
The rest of Jeremiah's article is also excellent reading. I love his point about the immediacy of results. That's a frustrating business mindset for technical problem solvers.
Maybe that gets into the realm where the business needs to start working with IT, as opposed to *only* saying IT needs to align with business.
by michael 03.25.09 at 2:38 PM in /general
I've been getting behind on too many blogs these days, but this morning I was catching up with posts on the Security Catalyst site and have been impressed with the myriad contributors posting useful and digestible articles. Nice!
One in particular, by Adam Dodge, reinforces something I've been trying to learn these last few years (and it is also referenced in the "A Hacker Looks at 50" presentation). In essence: don't be afraid to fail; don't be afraid to be wrong; don't be afraid to be "not perfect."
I've seen this in many ways: in books for tech geeks, posts on blogs, and even leadership/CEO books. I've even experienced it because, let's face it, we learn the most when we fail (or, for us geeks, we learn the most when we're troubleshooting). Waiting for perfection is inaction. We even learn this in relationships: the power of admitting to being wrong.
But damn is that paradigm hard to learn when we're implicitly taught, from childhood through adulthood in the workplace, that you have to be right and it is bad to be wrong. Even with topics I know very little about, I feel this urge to present myself as knowledgeable (such as nodding along with the service mechanic explaining what is wrong with my car!).
So it's been a sort of quiet goal of mine to be wrong a bit more often, and ask more questions, even seemingly simple ones, just to allow me to understand things better. And rather than sit inactive waiting for knowledge on a topic like implementing a new system/tool, just do it and be ready to be wrong.
Kinda like being ready for the inevitable security incident, eh?
I could even bring this back around to gaming. In order to be a good player, you have to take those small steps where you bumble around a map, try to learn the buttons, and figure out tactics. You'll take those 0-20 lumps. Or in an MMO, you can't just wait around to raid only when you have full knowledge; you have to get in there and make your mistakes those first few times. It is strange that these simple concepts become demons in a workplace.
by michael 03.26.09 at 8:31 AM in /general
Pirates or ninjas? Vi or emacs? Such great debates...so why not combine them into one? They do over at philosecurity!
by michael 03.26.09 at 10:51 AM in /general
I've been doing a little reading today, since it feels like Friday around here, and came across an article about space storms possibly creating disaster situations over large swaths of the US, due to our heavy reliance on the power grid for, well, pretty much everything.
The second problem is the grid's interdependence with the systems that support our lives: water and sewage treatment, supermarket delivery infrastructures, power station controls, financial markets and many others all rely on electricity... "It's just the opposite of how we usually think of natural disasters," says John Kappenman... "Usually the less developed regions of the world are most vulnerable, not the highly sophisticated technological regions."
Taking this down a bit into the IT infrastructure, this reminds me how we can become dependent on our own infrastructure to do common or even uncommon tasks. Web interfaces will be down in a power outage or misconfiguration. Do you know how to expediently console into your devices? Can you work on a command line? Do you have documentation on how your scripts operate so you could do the work manually in an emergency? Could you interpret tcpdump output if your network were being crippled by a worm, preventing IDS use?
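On that last question, even a crude one-liner will expose a scanning worm when the fancy consoles are down; a sketch, with the interface name assumed:

# show only SYNs (connection attempts): a scanning worm shows up as
# one source hammering destination after destination
tcpdump -nn -i eth0 'tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0'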
Some of this comes down to something I believe in: the simple fundamentals. Tools are great for making us more efficient, but at the end of the day, good IT persons are not defined by their GUIs. They are defined much like good ol' Unix tools: by how well they can use the simplest building blocks to get their tasks done, and how creatively they can chain those simple tools together to do fabulous things.
This also goes for security. We are not defined by the automated tools we use (those that are, are script kiddies), but rather by whether we understand how those tools work and could emulate similar behavior using the basics if need be.
Further, we can expand this into our virtual infrastructure. If the host goes down, or hell, even just your VirtualCenter client box, are you dead in the water? Would you be able to stand up a (*shiver!*) physical web server quickly and get critical apps working while the host is being operated on?
Finally, this does echo an aspect to one of the simple security maxims that I believe was quoted or made popular by Schneier or Geer: "Complex systems fail complexly."
by michael 03.26.09 at 1:54 PM in /general
I missed this bit of news that Twitter accounts with SMS texting turned on may have been hijackable for quite some time (I'm beginning to think Krebs is one of the only truly successful security journalists around!), provided you knew the mobile number someone had activated for posting Twitter messages and your messages came from an international location. Read the article for the details.
More disturbing is the tone of dismissal and lack of creative thinking from Twitter in regards to this issue. Sure they had a fix, but they certainly didn't grasp the full issue.
In essence, we're rolling new tech (and ways tech can interact with other tech) out faster than we can properly manage it. Then again...that's nothing new, now, is it?
by michael 03.26.09 at 4:40 PM in /general
LinuxHaxor.net has posted 10 Twitter clients for Linux. I've not used any of them; in fact, I've not used any Twitter clients so far. I Twitter from work (web) or through my phone. But I know I'll only get the most use out of Twitter if I can be less disjointed in my following and participation, and see twits as they get posted by the people I follow. Plus a nice way to scroll back over the last x hours I've missed (props to recent [this week] interface changes that improve this!). That will all require a Twitter client. So, someday sooner than later, I'll be trying these out and wanted to file away the link.
by michael 03.27.09 at 9:48 AM in /general
A quick pointer to an excellent article by Gunnar Peterson talking about his "He Got Game" rule. In short, you gotta have game with coding if you want to tackle securing code. This runs parallel to my thinking that you have to know how to code before you can know how to secure code. Adrian Lane adds an excellent comment as well, at least from what I pulled from it (something about its wording made me need to read it 5 times...).
I'll state there are always exceptions, but I'd say those exceptions are not the norm at all. At least you can say that if someone is technical in one area, they *could* have a small headstart in tackling another technical area. In the end, just like having a security mindset is a *huge* help for a security professional, having an aptitude for and experience in coding is a *huge* help for a dev security pro.
I could simply be failing by generalizing way too much. :)
The difference in all of this, to me, is TRAINING/PRACTICE. Whether it is self-prescribed or work-prescribed, training makes a difference.
As far as his book recommendation, I have no idea about it, but I'd be willing to give it a flip-thru to see if I could grasp it and benefit from it.
* The older I get and the farther I get from the analog world, the more I wonder how the hell we used to write and add emphasis without markup tags or non-standard type (**, bold, italics, all-caps...). Then again, without computers, thinking about what job I would be working now leaves me blank too...
by michael 03.27.09 at 2:09 PM in /general
I'm obviously catching up on some blogs on a rather nicely lazy Friday. Over at Tenable, they have a repost of Marcus Ranum's recent keynote at SOURCE Boston, Anatomy of The Security Disaster. This is a long read, but exceedingly well worth it. I apologize for not looking too hard for a posted video.
So, what’s going on? We’ve finally managed to get security on the road-map for many major organizations, thanks to initiatives like PCI and some of the government IT audit standards. But is that true? Was it PCI that got security its current place at the table, or was it Heartland Data, ChoicePoint, TJX, and the Social Security Administration? This is a serious, and important, question because the answer tells us a lot about whether or not the effort is ultimately going to be successful. If we are fixing things only in response to failure, we can look forward to an unending litany of failures, whereas if we are improving things in advance of problems, we are building an infrastructure that is designed to last beyond our immediate needs.
by michael 03.27.09 at 3:46 PM in /general
Dan Kaminsky released some information this morning showing it is possible to remotely (and anonymously) detect whether Conficker has owned a system. He does link to a POC scanner (Python). This is the result of some work by Tillmann Werner and Felix Leder of the Honeynet Project. Looking forward to the paper!
Update: Here is more information about Conficker compiled by the handlers at the SANS diary. I haven't personally paid much attention to Conficker recently, mostly because we appear to be fully patched on known, managed systems where I work, so it has been a non-issue since Microsoft released the patch (MS08-067). That, and it was pretty obvious the issue at hand was wormable and would be important.
by michael 03.30.09 at 9:59 AM in /general
FOR IMMEDIATE RELEASE
Terminal23.net is proud to announce their offering of cloud computing services to the general public. Terminal23.net will immediately begin offering blog, news, and commenting services to all customers through its stable and scalable cloud computing architecture. As a visitor to our service, the more you click around, the more our system recognizes this and provisions computing resources to serve your news needs. In addition, customers do not have to worry about the complexity of the underlying technology!
Terminal23.net is also proud to align itself with the Open Cloud Manifesto:
- We are dedicated to working with other cloud computing providers to address the challenges of adoption our service may have, and to support ongoing standards. We have started by using common blog software, and a common layout of post title, body, date, and even comment services!
- At no time will we lock our customers into using only our service. Feel free to read other blogs, too!
- We will work diligently to align ourselves with existing standards wherever possible.
- We will also be aware that needs for new standards will be met through collaboration rather than individual standard provisioning.
- We will be committed to working with the community, not to further our own technical needs, but rather in response to customer needs.
- We will...hell..these all sound the same anyway, so we just meet the last principle too!
Terminal23.net is excited about the future and about offering this new service to the public. This is a new chapter for our organization!
by michael 03.31.09 at 8:15 AM in /general
BlackHat USA 2008 videos have been posted!
Note: Hrm, seems I forgot to hit the Publish button 3 hours ago...
by michael 03.31.09 at 8:46 AM in /general
As if the state of PCI wasn't confusing enough, here is a piece from ComputerWorld that basically makes my head explode:
A Gartner Inc. analyst is urging companies that do business with Heartland Payment Systems Inc. and RBS WorldPay Inc. not to switch to other payment processors just because of Visa Inc.'s decision this month to remove Heartland and RBS WorldPay from its list of service providers that are compliant with the PCI data security rules.
and later this:
Visa requires all entities that accept credit and debit cards issued under its name to work only with service providers that comply with the PCI rules, which are formally known as the Payment Card Industry Data Security Standard (PCI DSS).
But in a research bulletin issued yesterday (download PDF), Gartner analyst Avivah Litan said that customers can continue to utilize Heartland and RBS WorldPay without facing any fines from Visa.
My first reaction is, "So why the hell does PCI (or the PCI certified listing) matter?" Yes, I understand companies and people make mistakes and honestly this may not be reason to jump ship from an entity, but this certainly questions the relevance of PCI listings.
Well, we'll make an exception to our own rules saying you need to work only with service providers that are certified?
They're going to be recertified so stick with it for a bit? Are you sure? And what if they lapse at "a point in time" again?
PCI was not at fault because while HPS was certified at a point in time, it did not maintain that certification at every point in time? (Wow, that could be the infinitely defensible weasel-out card!)
By the way, their delisting is just a point in time thing, just wait?
So, we have this PCI certified listing that PCI itself wants you to adhere to, but if someone drops off, don't worry about it because they'll recover. Is there *any* reason left to worry about someone not appearing on that list or being delisted? Which is worse?
And I like the irony (?) of another recommendation in the same Gartner report:
All parties that handle cardholder data: Focus on maintaining continuous cardholder data security, rather than on achieving PCI-compliant status.
No shit? But isn't that "do it yourself all the time" attitude what keeps/kept us in a mediocre state in the first place?! It obviously does not work broadly, so we need a kick in the junk by something with steel toes. But do we really need limp steel toes too?
by michael 03.31.09 at 3:21 PM in /general
Need to comply with PCI? Whether you have wireless devices or not, you need to scan and make sure you don't have any popping up. This SPSP report goes into detail on the subject.
My biggest concern was the mention that using NetStumbler or Kismet to discover rogue access points is sufficient. I agree, but only if you're constantly analyzing the results; i.e., not just doing a walk-through every quarter, month, or week, but having a dedicated system always looking. Not some point-in-time crap.
Why? Because an idle SSID-hiding AP will still be invisible to NetStumbler and Kismet (even a chatty SSID-hiding AP will hide from NetStumbler!). You need to capture even the small window where a wireless AP is talking.
By the way, I'm hoping some answers to EthicalHacker.net's latest challenge will not only answer the second question ("How were the kids able to access Greg's rogue access point even though it was not detected during Mr. Phillips' PCI compliance assessment?"), but also explain how to detect a rogue wireless device that isn't talking at the moment. I'm not sure that is possible short of brute-forcing an SSID response or trying to get the AP to talk from wired to wireless somehow...
by michael 03.31.09 at 4:40 PM in /general