linuxhaxor.net: 10 twitter clients

LinuxHaxor.net has posted 10 Twitter clients for Linux. I’ve not used any of them; in fact, I’ve not used any Twitter clients so far. I Twitter from work (web) or through my phone. But I know I’ll only get the most out of Twitter if I can be less disjointed in my following and participation, and see tweets as the people I follow post them. I’d also like a nice way to scroll back over the last x hours I’ve missed (props to the recent [this week] interface changes that improve this!). All of that will require a Twitter client. So, someday sooner rather than later, I’ll be trying these out and wanted to file away the link.

did you know how easy it was to hijack twitter via sms?

I missed this bit of news that Twitter accounts with SMS texting turned on may have been hijackable for quite some time (I’m beginning to think Krebs is one of the only truly successful security journalists around!). The catch: you need to know the mobile number someone has activated for posting Twitter messages, and the attack has to come from an international location. Read the article for the details.

More disturbing is the tone of dismissal and lack of creative thinking from Twitter regarding this issue. Sure, they had a fix, but they certainly didn’t grasp the full issue.

In essence, we’re rolling new tech (and ways tech can interact with other tech) out faster than we can properly manage it. Then again…that’s nothing new, now, is it?

the danger of abstracting too far from the basics

I’ve been doing a little reading today, since it feels like Friday around here, and came across an article about space storms possibly creating disaster situations over large swaths of the US. This is due to our heavy reliance on the power grid for, well, pretty much everything.

The second problem is the grid’s interdependence with the systems that support our lives: water and sewage treatment, supermarket delivery infrastructures, power station controls, financial markets and many others all rely on electricity… “It’s just the opposite of how we usually think of natural disasters,” says John Kappenman… “Usually the less developed regions of the world are most vulnerable, not the highly sophisticated technological regions.”

Taking this down a bit into the IT infrastructure, this reminds me how dependent we can become on our own infrastructure to do common or even uncommon tasks. Web interfaces will be down during a power outage or a misconfiguration. Do you know how to expediently console into your devices? Can you work on a command line? Do you have documentation on how your scripts operate so you could do the work manually in an emergency? Could you interpret tcpdump output if a worm is crippling your network and preventing IDS use?

Some of this comes down to something I believe in: the simple fundamentals. Tools are great for making us more efficient, but at the end of the day good IT people are not defined by their GUIs. They are defined much like good ol’ Unix tools: by how well they can use the simplest building blocks to get their tasks done, and by how creatively they can chain those simple tools together to do fabulous things.

This also goes for security. We are not defined by the automated tools we use (those who are are script kiddies), but by whether we understand how those tools work and could emulate similar behavior using the basics if need be.

Further, we can extend this to our virtual infrastructure. If the host goes down, or hell, even just your VirtualCenter client box, are you dead in the water? Would you be able to quickly stand up a (*shiver!*) physical web server and get critical apps working while the host is being operated on?

Finally, this does echo an aspect of one of the simple security maxims that I believe was quoted or made popular by Schneier or Geer: “Complex systems fail complexly.”

on embracing failure

I’ve been getting behind on too many blogs these days, but this morning I was catching up with posts on the Security Catalyst site and have been impressed with the myriad contributors posting useful and digestible articles. Nice!

One in particular by Adam Dodge reinforces something I’ve been trying to learn these last few years (and is also referenced in the A Hacker Looks at 50 presentation). In essence: don’t be afraid to fail; don’t be afraid to be wrong; don’t be afraid to be ‘not perfect.’

I’ve seen this in many ways: in books for tech geeks, posts on blogs, and even leadership/CEO books. I’ve even experienced it because, let’s face it, we learn the most when we fail (or, for us geeks, when we’re troubleshooting). Waiting for perfection is inaction. We even learn this in relationships: the power of admitting to being wrong.

But damn is that paradigm hard to learn when we’re implicitly taught, from childhood through adulthood in the workplace, that we have to be right and it is bad to be wrong. Even on topics I know very little about, I feel the urge to present myself as knowledgeable (such as nodding along with the service mechanic explaining what is wrong with my car!).

So it’s been a sort of quiet goal of mine to be wrong a bit more often, and ask more questions, even seemingly simple ones, just to allow me to understand things better. And rather than sit inactive waiting for knowledge on a topic like implementing a new system/tool, just do it and be ready to be wrong.

Kinda like being ready for the inevitable security incident, eh?

I could even bring this back around to gaming. In order to be a good player, you have to take those small steps where you bumble around a map, try to learn the buttons, and figure out tactics. You’ll take those 0-20 lumps. Or in an MMO, you can’t just wait around to raid only when you have full knowledge; you have to get in there and make your mistakes those first few times. It is strange that these simple concepts become demons in the workplace.

jeremiah on application security spending

Jeremiah Grossman dives into the question, Why isn’t more money being spent on Application security when it is obviously important today?

During an event a panel of Gartner analysts asked the audience what the best way is for an organization to invest $1 million in an effort to reduce risk. The choices were Network, Host, or Application security… The audience selected Application security. However, the Gartner CSO (who took the role of CIO in the play) overruled the audience’s decision. They instead selected Network security, while at the same time curiously agreeing that Application security would have been the better path. His rationale was that it is easier for him to show results to his CEO if he invests in the Network.

He has a point!

I also believe it has to do with visibility and knowledge. We’ve had networking and systems around for quite some time, and we’re getting better at operationally baking security in and showing it. I don’t think we’re nearly as mature with application security. Unless someone codes, they really just don’t get it, because it is hard to visualize and measure.

There is also an experience or knowledge gap where, again, unless you’re a developer, you really can’t effectively explain or demonstrate security or how to code securely. I’ve seen “senior” developers who have zero thought about security beyond the most basic level (e.g. “sure, we have admin and normal user types in the system…”).

The rest of Jeremiah’s article is also excellent reading. I love his point about the immediacy of results. That’s a frustrating business mindset for technical problem solvers.

Maybe that gets into the realm where the business needs to start working with IT, as opposed to *only* saying IT needs to align with business.

set ourselves up to blame others

Clouds. Ugh. I’m still trying to slowly make sense of what the cloud is, but it doesn’t help that pretty much everything is being rebranded as ‘cloud.’ Once upon a time I thought cloud computing was sort of like off-loading massive computing needs to someone else (a lot like SETI, only more commercial, or maybe more like purchasing botnet time?). Now I think ‘cloud’ refers to anything you use that isn’t in your pocket or on your desk. So does this mean Web 2.0 is officially passé and ‘Cloud’ is the new Web 3.0?

Nonetheless, some thoughts which likely illustrate why I’m not getting it…

– If an enterprise isn’t running its IT infrastructure correctly already, it alone can’t evaluate which cloud vendors *are* doing it correctly.

– Cloud vendors aren’t doing anything magical that makes them far better than your own infrastructure.

– And if the ‘cloud’ fucks up, you can just blame them, right?

– At least you can see into your own operations. You can’t see the cloud ops. And at least your operations can care about your business.

– Cloud companies want to make money too. Which means rather than paying contractors to make your solutions, you’re paying another enterprise to create your solutions. So, what are you really buying by probably spending more? (answer: experience and blame shift, and experience is often what enterprises are avoiding paying for in their own staff.)

– Cloud, in my view, yields value in: 1) experience through repeating solutions, 2) internal scalability through repeating solutions, 3) and internal efficiency through repeating solutions. If you can provide solution A for company Y, you should limit costs by basically providing solution A` to company C, right?

– Cloud is basically a new brand for the software market, the web market, or an IT data-churning service (B2B service?). Absolutely nothing new, so pick your poison.

– While basic computing needs for enterprises are very similar, it only takes a few weeks of work to make their environments terribly dissimilar. This undercuts the value any repeat solutions will have for different businesses, something the service industry deals with by stacking experience rather than selling pre-packaged products. Any developer creating solutions for multiple businesses could attest to this, I’m sure.

– And if cloud is a service, then it will always be pressured to squeeze 10 clients into the space where 6 quality-driven clients would exist. (*wave to Jerry Maguire*)

wisdom from a hacker looking at 50

I missed G. Mark Hardy’s talk at Defcon titled “A Hacker Looks at 50,” but I had earmarked it to check out later. I’m glad I did, since he has a lot of great wisdom to share. I wanted to yoink his main slide bullet points just to reinforce them for myself. His talk is available online (mp4). Here are G. Mark’s Observations on Life:

  • Just ask.
  • Don’t wait for perfection.
  • Become a master.
  • Vision is everything.
  • Never disqualify yourself.
  • Challenge your limitations.
  • Have a vision. Write it down.
  • Speak every chance you get.
  • Don’t go it alone.
  • Be flexible.
  • Aim high.
  • Be PASSIONATE.
  • Beware of bright shiny objects.
  • Choose tech or management.
  • Do something bigger than yourself.
  • Recipe for life:
    • vision
    • plan (take control back, take a break in the woods)
    • take risk (you can always go back)
    • stay focused (TTL)
    • determination (how badly do you want it?)
  • Don’t save your best for last.
  • Be generous now. (Our stuff doesn’t follow us.)
  • Enjoy life.

a general cynical moment

[Update 3/19/09: I’m cleaning out some unfinished posts that I didn’t want to lose, so I’m just publishing them as is. This post is a bit of a rant from summer 2008, but I feel I wanted to make some points about how IT may talk all pretty about ‘aligning with business’ but really we’re probably always going to be stuck in a ‘silo’ of some fashion no matter what. Also, entities are simply not doing the simple security things correctly. This compounds the ‘silo’ problem… I wonder if it would help if ‘business aligned with security?’]

We were talking in our team meeting this morning at work, and it became a bit of a cynical start to the day. That is one thing about being in IT and being security-conscious (or being in security)…you can become cynical and negative extremely quickly, and often. At least for many of us, we keep the venting in the back rooms.

We were talking about some of the breaches that have been occurring in recent years and how they are still only slowly pushing proper security measures. Interestingly, it seems that most, if not all, of the media-covered breaches are the result of stupidity on the part of users, or very simple mistakes on the part of the victim company or person. Perhaps really talented hackers are not getting caught, and maybe a lot of the more subtle attacks are being buried in corporate bureaucracy and fear, but I truly think most of the incidents are borne out of mistakes or opportunity for the attacker.

This means that a depressing number of these were preventable. And a depressing number of these make us corporate goons highly frustrated, because we talk and talk and demonstrate and warn about the same issues. Not much of this stuff is new to those of us with even a little common sense.

Ask your employees who is responsible for data security, and I would be willing to bet that half or more will say IT. Another small slice will act smart and say everyone, but they’re just supplying the right answer without really believing or living it. Very few will answer and truly believe that it lies with everyone. So that puts the burden on IT, for the most part.

Companies complain when we work in a silo or a vacuum, or do things on our own that affect their jobs without other people’s input, no matter how inane or useless that input may be. Which is weird, since we are supposed to do things on our own, like, you know, security.

We can often complain about lack of action or preventative planning in the upper ranks of a corporation. “It won’t happen to us” is a common refrain, whether explicitly spoken or implicitly implied (I wonder if you can explicitly imply something…). But the one that really annoys me is the statement, “We already have adequate security.” I really hate that, especially when you ask the IT guys if we have adequate security and we immediately either give an “I-know-better” smirk or look around suspiciously, wondering what politico-business trap we’re about to fall into based on our response. Top-down, there is a gap where eventually a C-level just doesn’t know the nuts and bolts and lives in their own little reality. Not all of them, but that is a very easy cloud to fall into, especially if they feel they should lead by example and trust their employees without validating that trust with anything more than, “it’s never happened yet!”

the lost battle for the desktop

[Update 3/19/09: I’m cleaning out some unfinished posts that I didn’t want to lose, so I’m just publishing them as is. This post was written nearly a year ago.]

update: Odd, there was just talk about this; maybe I was influenced in a roundabout way by this discussion at Slashdot: Should Users Manage Their Own PCs? (read the comments!)

Also more here.

There is increasing talk about worker angst with IT teams locking down computers and acting as dictators when it comes to adding software to their computers. Thin clients and terminals are suddenly becoming sexy again. Likewise, most office workers seem to have their own array of gadgets and devices that they want to use, IT policies be damned.

Rather than tackle that debate, which swings both ways, I want to play devil’s advocate and assume the direction taken is that employees have full rights on their own fat systems. Let’s say I work at an SMB that values employee happiness and creativity (software shop, video game shop, design group, etc.). And the decision has been made that employees are responsible for the software on their own systems, although the company itself may front the cost of any needed software; pirating is not allowed.

What does this mean to security of that organization? I know plenty of security geeks will go into immediate defensive mode, but I’d rather delve into what approaches are needed in such a situation.

The assumptions and setting:

  • Users have administrative rights to their systems.
  • IT also has administrative rights.
  • Users won’t install pirated or illegal software, but instead get comped by the org.
  • Servers are still the realm of the IT teams, so let’s just not think about them for now.

What are some issues that can arise in such an environment?

  • Systems may slow to a crawl as they become infected with crap upon crap.
  • Internal and external networks may slow to a crawl or become unusable due to worms, viruses, scanners, and bots; both internal-only congestion and externally targeted congestion.
  • Information may quickly get stolen, à la the program that installs itself and steals your AIM/WoW/bank account and password, whether actively, when triggered, or via keylogging.
  • IT may have to answer questions and provide support for non-standard programs across a huge range of possibilities.
  • Users may install tools that have malicious side effects, especially if they have a laptop that goes home. Things like BitTorrent and p2p apps tend to pop up on such systems.
  • Most systems will have one or several IM programs installed and in use, opening the user to phishing/spam, providing a potential avenue to send information beyond the corporate garden, and costing productivity if abused.
  • Users will use their personal webmail accounts, opening up the same avenues.
  • Any type of development or creation processes may not be possible to move from the user’s computer to a server. “You want *what* installed on the web server?!”

And here are some measures to pursue. These are not in any specific order.

  • A strong perimeter with aggressive ingress and egress rulesets with active logging on egress blocks. Yes, many apps will just tunnel through port 80, but that doesn’t mean we should forget the floodgates.
  • Strong internal perimeter to protect the DMZ and the suddenly rather untrusted internal LANs. Isolate print servers, file servers, and others from userland, letting only what is absolutely necessary past.
  • Strong internal network monitoring to identify traffic congestion and unwanted communication attempts.
  • The staff to attend to the alerts this stronger network posture will require. With such an untrusted userland network, bad alerts can’t sit for very long, and there may be plenty of them.
  • Consistent and regular user training about security concepts.
  • Regular communication amongst employees and IT about how to properly solve various problems, use programs more intelligently, and so on. If one program can solve a problem but everyone is just using what they already know, opening up communication may get everyone on a standard page. It certainly is better than everyone trying the same 10 programs to solve the same problem. [update: I’m not sure what I was saying here…]
  • Foster an open environment where users can talk candidly with IT and security, without expecting laughter or a quick rebuke.
    This is going to be much like the TSA assuming every passenger is a threat.

  • Will need an aggressive and automatic patching solution to keep the OS and major applications patched as much as possible.
  • Have a strong imaging solution and architecture in place. People mess up their computers now and then and require them to be re-imaged. People who control their own computers will mess them up even more.
  • Have strong network and file server anti-virus or malware scanning. Chances are pretty good that users will store their backup installs on your file server. Try to separate the screensaver crapware from the necessary stuff.
  • Be proactive in supporting the software inventory needs of your users. If a user has a piece of software they had the company purchase, keep an inventory or even a backup of the install disk and serial under lock and key. This is far better than letting users manage (or steal! or lose!) their own copies. A Photoshop disc left on a desk is a pretty easy crime of opportunity.
  • Plan to have strong remote management of users’ systems, especially when it comes to inventorying various things: accounts, installed software, running processes, resource consumption, log gathering. You likely won’t parse these out regularly, but for some you might want alerts, such as new user accounts appearing (see the sketch after this list).
  • Proactively offer to assist users with any PC questions they may have. Often, users have lots of little annoyances they live with, but offering to help with the fixable ones can often go a long way towards satisfaction not just with IT but their job as well. If a system is running slow or they don’t understand why a window displays as it does, assist them with fixing it.
  • When assisting users, take extra effort to include willing users in your troubleshooting. This not only opens lines of communication, but also teaches them as you go. Maybe next time they’ll already have checked for that rogue process before you get to their desk!
  • Might be wise to evaluate DLP technologies. While administrative rights for users on their desktops mean many forms of malware can do things like disable AV before it can intervene, many users are not nearly as sophisticated when they purposely or accidentally move important data from the safety of the corporate environment to an outside entity. It might be enough to implement DLP to stop all but the truly crafty and determined insiders. That might be risk avoidance enough to deal with the determined ones on a case-by-case basis.
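
To make the remote management bullet above a little more concrete, here is a minimal PowerShell sketch of the kind of inventory/alert check I have in mind: pull local accounts and installed software over WMI and flag any account that wasn’t in a stored baseline. The computer name, file paths, and alerting hook are placeholders I made up for illustration, not a specific tool recommendation.

# Hypothetical inventory check for one user PC (names and paths are placeholders).
$computer = "USER-PC-042"
$baselineFile = "d:\inventory\$computer-accounts.txt"
$baseline = @()
if (Test-Path $baselineFile) { $baseline = Get-Content $baselineFile }

# Local accounts and installed software, pulled over WMI.
$accounts = Get-WmiObject Win32_UserAccount -ComputerName $computer -Filter "LocalAccount='True'" |
    Select-Object -ExpandProperty Name
$software = Get-WmiObject Win32_Product -ComputerName $computer |
    Select-Object Name, Version

# Alert-worthy: a local account that wasn't there last time we looked.
$newAccounts = $accounts | Where-Object { $baseline -notcontains $_ }
if ($newAccounts) {
    Write-Output "New local accounts on ${computer}:"
    $newAccounts
    # hook in your email/alerting of choice here
}

# Refresh the baseline and keep the software list around for reporting.
$accounts | Set-Content $baselineFile
$software | Export-Csv "d:\inventory\$computer-software.csv" -NoTypeInformation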

Sadly, the reality is that a company willing to grant local administrative rights like this is likely too small to meet the needs listed above without some assistance.

responses to concepts for managers to understand

Mubix has posted his summary on things we wish our managers would learn, which I commented about the other day.

The #10 entry was about company buy-in and had only 1 vote, but I wonder if that single issue may drive a majority of the rest of the problems. It might not be that our managers don’t get these topics; they may be in the same boat as we are, feeling unsatisfied with current results.

If there is any bias, it might come from how we read the question and how far up the chain our manager is. If my manager were the CTO/CSO/CEO, I think I would answer more along the lines of #10. Maybe a better question would be, “What one concept would you want your company leaders to understand?” That would probably limit the technical responses and broaden the basic-concepts part.

Or maybe: what would be your security-related mission statement for your company (along with a few supporting statements, in case you think of mission statements as “make the world a better place”)?

from vulnerability to root in a few taps and clicks

SANS has published a story on an attack that bypassed a .NET/ASP web front end and poked a local privilege escalation. The tools mentioned can be found here: Churrasco (has the full description), Churrasco2 (updated for Win2008), and ASPXSpy (a .NET webshell). Note that McAfee AV does detect the file aspxspy.aspx as naughty.

…developers wonder why I don’t let their apps write locally…or publish directly since my replication removes rogue files automagically…

a security incident when you have no security posture

I didn’t expect to be quite as entertained by this story as I was. I apologize for not knowing where I got linked to this, but CSOOnline has the first part of a two-part story on how a company that suffered a data breach did everything wrong. These are the sorts of stories that need to be told. Repeatedly. I don’t care if the authors are anonymous and specific details are scrubbed to protect the guilty and victimized. This sort of stuff shares details, and that’s what we continue to need: details to learn from, and tangible illustrations of the risk to show others.

…They lacked the equipment to detect a breach and, even if they did, lacked the human resources to monitor such equipment. He told us his staff consists of one full-time employee and one half-time assistant who is shared with the help desk… [ed.: a company of 10,000 users, 127 sites…]

“What logs? Remember that each business unit is different, but here at corporate we don’t have logs. In fact, logging was turned off by the help desk because they got tired of responding to false alarms. Help desk reports to the IT director, not to security.”

Everything starts with a basic policy from senior management that says security is important. From there flows talented staff who aren’t going to just disable pesky alerts or be pulled in the IT operations/support direction 100% of the time. And so on…

it’s ok to break into things if you’re just demonstrating

Speaking of ethics, the BBC decided to do some of its own hacking for a show, Click.

The technology show Click acquired a network of 22,000 hijacked computers – known as a botnet – and ordered the infected machines to send out spam messages to test email addresses and attack a website, with permission, by bombarding it with requests.

Click also modified the infected computers’ desktop wallpaper.

If the BBC doesn’t get hurt by this, the lesson we can all learn is: Make sure you have a hobby in journalism/reporting/television. That way next time you get caught cracking into something, you can just say it is for research and part of a report you’re doing. Then we can all laugh and share a pint because it’s all good then!

Oh, and next time I accelerate my fist into your abdomen, just let it be known it was without criminal intent. Over and over. Maybe I’ll laugh during that too, to show I have no ill will in it.

powershell: executing remote scripts from a script

This is a complicated issue and may only make sense to me, but I’d like to document it for future reference. I’ll try to simplify as much as possible and stick to the crux of the matter: remotely executing PowerShell scripts from PowerShell scripts.

Pretend I have 3 web servers. On each server, a PowerShell maintenance script runs perpetually (an infinite loop). If I have a new web site to build, I edit a text file in a network folder. The maintenance scripts see this and execute a “createsites” script. Sometimes, due to downtime when IIS needs to be stopped, I need these scripts to run in an orderly fashion, so one maintenance script is always a “master” of the others.
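
For context, here is a minimal sketch of what one of those watcher loops might look like. The share path and polling interval are made up, and I’m glossing over how the request file actually gets parsed; this is an illustration, not my real maintenance script:

# Hypothetical watcher loop: poll a request file on a network share and
# run the createsites script whenever its contents change.
$requestFile = "\\fileserver\webops\newsites.txt"    # placeholder path
$lastSnapshot = ""

while ($true) {
    if (Test-Path $requestFile) {
        $current = Get-Content $requestFile | Out-String
        if ($current -ne $lastSnapshot) {
            & "d:\setup\scripts\createsites.ps1"     # build whatever is new
            $lastSnapshot = $current
        }
    }
    Start-Sleep -Seconds 60    # poll once a minute
}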

I’ve finally gotten sick of having a perpetual script running on each server (using resources and requiring an interactive login). What I want is one server that coordinates the execution of all my other little task scripts on the 3 web servers. Yup, I need to figure out remote execution!

Yes, PowerShell v2 has decent remoting capabilities, but I can’t effectively leverage them quickly: we still use IIS6, my web servers have PowerShell v1, and updating the scripts properly would mean a lot of rewrite time. Instead, I’d like to get this architecture going with as little effort as possible.

I’ll use psexec and PowerShell.

First, I need to make sure the account that PowerShell will run under has a profile file set up. All of my scripts run out of d:\setup\scripts. If I want to start a remote PowerShell session under a user and be able to reference other scripts by relative path inside my first one, I need that user’s profile to start in d:\setup\scripts.

Create the file profile.ps1 in ..\documents and settings\script user\my documents\WindowsPowerShell\. The contents:

# start every session for this account in the scripts folder so relative paths resolve
set-location d:\setup\scripts

This is the call I make on another server, using an account I designate as my web installer:

./psexec \\WEBSERVER1 -u DOMAIN\USER -p 'PASSWORD' /accepteula cmd /c "echo . | powershell -noninteractive -command `"& 'd:\setup\scripts\createsites.ps1'`""

Whoa, wait, what’s the “echo . |” thing in there? That lets me see the progress of my script, and it lets psexec work properly on the target machine so my calling script can continue on with life. I found that just calling a powershell instance on its own led to powershell/psexec never executing properly.

Did I need that -u and -p declared? Strangely, I did, even though the script was already running as that user. If they weren’t declared, I don’t think the PowerShell profile loaded properly.
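
Tying it together, here is a rough sketch of what the coordinating script on the management server might look like: walk the web servers in order and fire the same psexec call at each one, waiting for each to finish (and succeed) before moving to the next. The server names, hard-coded order, and plain-text password are placeholders; treat this as an outline, not my finished script.

# Hypothetical coordinator: run createsites.ps1 on each web server, in order.
$servers = "WEBSERVER1", "WEBSERVER2", "WEBSERVER3"    # placeholder names

foreach ($server in $servers) {
    Write-Output "Running createsites on $server..."
    ./psexec \\$server -u DOMAIN\USER -p 'PASSWORD' /accepteula cmd /c "echo . | powershell -noninteractive -command `"& 'd:\setup\scripts\createsites.ps1'`""
    if ($LASTEXITCODE -ne 0) {
        Write-Output "createsites failed on $server (exit code $LASTEXITCODE); stopping here."
        break    # don't keep going if one server's build failed
    }
}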

Questions?

Why don’t I use functions in my maintenance script instead of a separate script for each major task? I have many other pieces beyond a “createsites” task, some of which I call separately anyway. I’d much rather manage smaller scripts than one large beast. I’m not a software developer. 🙂

Why not use Task Scheduler? Let’s just say I don’t want Task Scheduler running on production web servers. And I want all my web servers to be managed the same.