lessons from a cyberdefense competition red team part 2

This is a 3-part account of my experience on the red team for the ISU CyberDefense Competition. Part 1, Part 2, Part 3

Observations on cyber security
– Web application attacks and web site defacements are fun and tend to gather attention, from attackers and onlookers alike. Sadly, such attacks first of all don't bring services down, and so don't penalize teams all that much. Secondly, they result in little gain for the red team: a short-lived and very public victory that yields no root access, no further gains, and most likely closed vulnerabilities.

– Network and OS attacks have gone out of style, even against systems that have their balls left hanging out into the public network space. Several ripe systems survived a long time while people tinkered with web app vulnerabilities. Beyond nmap scans, I saw very little network twiddling or system/service attacks. This may mean that over the next few years, organizational network security and hardening will atrophy as everyone focuses on the web and clients.

– The perimeter is still a battleground, wherever it may be. One team was rumored to be running ipv6 on their inside network, which would have made things interesting for the Red Team had we ever gotten inside that network. Unfortunately, most efforts were spent pounding and poking the external surfaces and not cruising the internal networks. Note that while a web server is part of the perimeter, it has several exposed layers: application, server app, OS. And someone wants to say defense in depth is dead?

– A majority of the red team's time was spent exploiting holes in the web apps. There were several openings to upload files and execute php code on the web servers. Unfortunately, once these attacks became apparent, such holes were closed and access became very limited. Some teams opted to break upload capabilities, others removed such sections from their site, and others took the more correct route of disallowing specific php execution or overwrites. Some teams left their /etc/passwd file exposed to such attacks, but all cases had shadowed passwords. More defense in depth…

– Doing the fundamentals greatly increases survivability on the Internet. Put up a firewall and make sure your perimeter doesn't leak information or have excessive holes. Keep patches up to date on systems and services. Change default passwords and accounts. And once those are done, pay some special attention to the users such services run under, the web applications, and the web servers. I can see why network attacks are being eclipsed by web-based attacks: securing web apps requires a huge skill set and a lot of experience and knowledge. Do you know how to securely write interactive php apps running on Apache such that they hold up to attacks? That's a job in itself, and more than one person's worth of work. Once the fundamentals are taken care of, it really takes a special attacker to make headway into your network. Do they have 0day exploits for your patched services, or the ability to create them? It really does take special skills to do that. I think only at the DefCon CTF competition might there be such expertise in an organized competition. If you get the fundamentals down and your web apps and servers are solid, start changing service banners to really raise the bar for attackers. The fundamentals are money in the pockets of your company.

(A side note: not every IT person knows the fundamentals, even today. A lot of these students went into this competition missing the fundamentals, and many more will leave the program without a firm grasp of them. LEARN THE FUNDAMENTALS!)

– For the love of God, make an attacker’s life difficult. Make the firewall not respond to closed ports. Make my nmap scan take a long time and throw everything back at me as filtered. Make me earn my scans. Make sure egress on the firewall is strictly configured so even if I do get something planted, it might not be able to call back (and will be exposed in the logs).

– Read the fucking logs. Wow. Once a server is up and running and a service is being used, tail all the logs. The logs will reveal who the attackers are and what they are probing, especially those web app logs! Typically, once a vulnerability is found, one red team member exclaims his find and the other 14 members of the team clamber over to see it themselves. A sudden ton of hits on a seemingly normal part of the site may be cause for alarm. And after an incident does occur, those logs are your survival. You can put a defaced site back up, but it will just get owned again. Check the logs and close the holes, either by changing the code, removing the offending pages, or adjusting server protections to disallow certain things.
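
A quick-and-dirty way to spot that kind of pile-up is to simply tally hits per URL. Here's a rough sketch in PowerShell (the log file name is made up, and it assumes an Apache-style common log format where the request path is the seventh field):

$hits = @{}
foreach ($line in (Get-Content "access.log"))
{
    # the request path is the 7th space-separated field in common log format
    $url = $line.Split(" ")[6]
    if ($url) { $hits[$url] = [int]$hits[$url] + 1 }
}
# top ten most-hit URLs; a pile-up on a boring page is worth a second look
$hits.GetEnumerator() | Sort-Object Value -Descending | Select-Object -First 10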

– Similar to logs, it can be amazingly insightful to have an outside packet logger or IDS hanging off the external interface of the perimeter. Even a DoS against the firewall can be detected, diagnosed, and fixed with such data. Without this data, a few teams were left wondering why their service seemed down; they looked at the service and the box it was running on rather than the firewall on the network.

Coolest attack
The most interesting attack I saw was actually a DoS attack. The attacker called it a reverse LaBrea attack, although it could also be called tarpitting a server or a FIN starvation attack. In it, the attacker opens hundreds of valid TCP connections and then begins the teardown sequence by sending a FIN/ACK packet. The server responds with an ACK and its own FIN/ACK packet, and then waits for the final ACK response from the attacker. The attacker, however, never sends that ACK packet back, which keeps the server's connection open. Nicely enough, the server itself is not very busy, since it is only waiting for hundreds of open connections to end. This can be mitigated in numerous ways, such as detecting DoS attacks and adjusting the firewall to block after a threshold of connections has been made, aborting out of stalled TCP teardowns, or lowering the wait time. Iptables is susceptible to this unless steps are taken to correct the behavior.

Some false lessons
While lots of lessons and insight can be learned, there are some red herrings and false lessons that hopefully no one takes away.

– Applications are not always secure. Almost every team was running fairly new versions of their applications and systems, from new phpBB and SquirrelMail implementations to new Ubuntu boxes. While it might seem impregnable today, will it be impregnable in a year in a company with poor patch management? Nope. Those little applications like a wiki or phpBB can quickly draw cold sweat on the necks of IT admins…

– While the red team had a lot of talent in one room, I still wouldn’t consider any of them black hat hackers by any means. Most were still students, and only a handful were security professionals. The skillsets only go so far, but the real world can throw any level of skills at you.

– Attacks can still come from within. Pretty much out of scope was social engineering on any scale that would allow unsupervised physical access to systems. Also, attacks on clients, such as emailed trojans, were not really possible. There really weren't that many systems on the inside to dupe in order to gain a foothold. This was more like attacking a standalone DMZ. But don't forget every system and user is a target.

– DoS attacks are still debilitating, and not only can they end your night quickly in a competition, they can close a business just as quickly in the real world. And for as debilitating as a DoS attack is, it is typically the least planned-for and, in theory, one of the harder attacks to thwart. (Ok, DDoS is harder, just to pre-empt commenters saying it is easy!)

Personal lessons
I'm out of practice. A competition is not the place to fire up Metasploit 3 for the first time (although thankfully I have used Metasploit 2 in the past). Likewise, know the general tools to use for basic stuff, and practice them: domain transfers, nmap scanning, OS/service fingerprinting (both from a scanner and from just using the services, like Apache running on Ubuntu). I'm rusty on almost everything, so practicing is definitely in order. It's just one of those things I don't get to do on a daily basis on the job (or even weekly or monthly!). Know the BackTrack tools inside and out. Be familiar with wireless attacks both as an associated client (airpwn, hamster/ferret, rogue AP, MITM) and as an outside attacker (DoS, WEP cracking, IV generation). Knowing these things well up front goes a long way toward being an efficient attacker. Just like the defenders, attackers need to know the fundamentals and practice them regularly. That leads to less time spent relearning tools or settings, and more time being surgical and creative. It can also mean less jubilation at low-level triumphs, and more thinking about how to leverage those quieter wins to get the most gain over the long run.

lessons from a cyberdefense competition red team part 1

This is a 3-part account of my experience on the red team for the ISU CyberDefense Competition. Part 1, Part 2, Part 3

This weekend Iowa State University held its annual CyberDefense Competition in Ames, Iowa. The event is hosted by students and faculty from the Information Assurance Student Group and the Electrical and Computer Engineering department. In the event, teams of students attempt to deploy and manage various services representative of normal business applications. During the 20 hours the event covers, the teams are scored on their service uptimes as tracked by network monitoring (Nagios) and other neutral teams acting as normal users of the services. In addition, much like the real world, there is another team of students, faculty, and area professionals acting as attackers, intent on owning and bringing down those offered services. The services the teams were required to offer were web services (with pre-packaged web content), mail (smtp and imap), a telnet shell, ftp, wireless access for normal users, and dns to get it all working.

The teams are made up of regular students, I believe mostly to meet requirements in a couple of classes. There were 15 teams, ranging from what looked like 4 members up to maybe a dozen. The students had time in advance to plan and implement their services. To illustrate the aptitude of the teams, at the start of the event only about half of them had their services up and running. Even through the course of the night, not every team had success getting services up, while other teams were more advanced, running ipcop, pf, or iptables firewalls and hosting services on Linux, or running anything from MS Exchange on down to SquirrelMail or IIS mail. This illustrates the widely varied skill levels across the teams. Some teams had everything choked down behind a firewall, while others had disparate boxes sitting on public IPs, and still others were having problems with DNS configs.

The “world” for this event is a bit interesting in itself. The teams were allowed to use publicly routable IP addresses because the event was hosted in the Iowa State ISEAGE system. ISEAGE (Internet-Scale Event and Attack Generation Environment) is a mostly closed network that simulates the Internet on a small scale to model attacks and other research activities.

At the end of the event, teams were scored and a winner announced, but more important are the lessons learned from the event itself: how difficult or easy it is to put up and manage services for a business, attend to the needed systems, and react to security events; what worked and what didn't. Hopefully everyone went away feeling at least a bit more enlightened about the world of professional IT, no matter what their end performance in the event itself.

On a more personal note, I certainly wish I had not only had more interest in the field of networking and security when I was in college, but also that we had had these kinds of groups. I graduated in 2001 with a degree in MIS from ISU, but I never had any security courses (and almost no security emphasis in programming or other classes), and my only networking exposure came in my last semester. I graduated having never really installed or upgraded an operating system, nor knowing much of anything about normal business services and technology. I'm amazed where I've come since then, and I'm amazed that college studies are starting to catch up to the real world of IT and move away from the academic "let's just teach everyone C and theory" practices. A competition like this, where students install and work on these services, is downright invaluable, even to those who didn't successfully get services running. IT is so much about doing things, not about sitting in a classroom listening to a lecture about the theories.

license to be a digital nuisance

Michael has Friday off. Michael has the day off because he will be attending Iowa State U's 3rd (?) annual CyberDefense Competition as a member of the Red Team. Michael would link to the site, but even his Google-fu is not yielding up a currently active site. This CyberDefense Competition hosts teams of college students trying to run servers and systems in a hostile environment for over 24 hours. Michael anticipates having a fun time and likely learning far more than he is able to give.

reboot a system in powershell

For a script I have that maintains our systems and installs new versions of our web code, I have the occasional need to also reboot a server after that install. Like most things in programming, there are several ways to script a reboot.

This first example (my preferred method) reboots a remote system.

$objServerOS = gwmi win32_operatingsystem -computer servername
$objServerOS.reboot()

Leave off the “-computer servername” to reboot the local system.

This second method is similar. The following lines will reboot the local system, force a reboot of the local system, and force a reboot of a remote system, respectively.

(gwmi win32_operatingsystem).Win32Shutdown(2)

(gwmi win32_operatingsystem).Win32Shutdown(6)

(gwmi win32_operatingsystem -ComputerName Server).Win32Shutdown(6)

Here is a list of the codes that can be used.

0 - Log Off
4 - Forced Log Off
1 - Shutdown
5 - Forced Shutdown
2 - Reboot
6 - Forced Reboot
8 - Power Off
12 - Forced Power Off

Pipe $objServerOS or (gwmi win32_operatingsystem) to Get-Member to see more goodies.

There is a good chance the above commands will error when trying to do a reboot, complaining about privileges not held, even if you're running as admin. Add a privilege-enable line between the two lines of the first example, and it will process just fine.

$objServer = gwmi win32_operatingsystem
$objServer.psbase.Scope.Options.EnablePrivileges = $true
$objServer.reboot()

richard clarke’s five steps to save the internet

Richard Clarke recently spoke at a conference and listed five steps to save the Internet. Here is a brief on the five steps:

1. National biometric ID
2. More government oversight of the Internet
3. Nonpartisan government oversight to protect privacy
4. Secure software standards
5. A closed Internet for critical services like the power grid

1- I don’t like the idea of a national ID or using biometrics, but I do know that social security numbers are antiquated and broken. They’re just not working anymore in our ultra-efficient information age. I agree change needs to happen; I don’t know what solution I would like. Something similar to what all the cyberpunk visionaries have written about for decades is most likely inevitable. An inevitable evil. I’ve long felt that a major hurdle for the Internet deals with identity; trusting it and verifying it. And no, I don’t think OpenID is the obvious solution.

2- I don’t like this either, and hopefully it won’t happen; but I am surprised ISPs and the Net have held out this long and this well. Hopefully it stays that way.

3- Maybe I’m old-fashioned already, but isn’t privacy oversight covered by the judicial branch?

4- It's obvious that we need better standards. Is the government the proper standards-bearer? I doubt it, and I definitely wouldn't hang my hat on getting this done well enough to make an ultimate difference. It will help, though, as part of a blended improvement to cyber security and software security.

5- Hrm, again, I might be old-fashioned, but I call this either a private network or a network with strong perimeters and controls. I think Clarke is looking for attention and media drama by calling it a closed Internet, but I don't think that's what he really means to talk about. Why do you need a closed Internet, and how is that different from a private WAN? Open access to the web and other services? You mean like the walled garden from AOL? I'll dismiss this point because it is just bait and hype, nothing more.

defcon 15 video on the dirty secrets of the security industry

Finally getting around to watching DefCon videos, I started out with Bruce Potter's Dirty Secrets of the Security Industry presentation. I've seen recordings of Bruce Potter's ShmooCon talks before, and I've enjoyed his presence. Definitely a cool guy with a lot of passion for the industry, and I think he's open to creating discussion, even if he knows he's wrong and is just trying to get everyone to think. I can't help but admire that! Here are some notes, followed by my reactions. I definitely recommend watching this talk. Everything in blockquotes is a paraphrase or quote lifted from the slides and presentation.

Bruce opened by talking about some foundational concepts and history of security. He made a point to show that security is still growing and making more and more money. He then went into his dirty little secrets.

Secret #1 – Defense in Depth is Dead – The problem is in the code. We’ve always had bad code. Fix the code. Firewalls don’t help things that have to be inherently open, like port 25 to the Internet for the mail server. Spending way too much money and time with defense in depth! Need type safety (programming), secure coding taught in schools, and trusted computing. We need better software controls on our systems, not better firewalls.

I’m hearing a lot more about this lately, about how we need inherently secure systems and devices and protocols. 🙂 All his points are good, and I really don’t oppose outright a viewpoint like this. We need better training for software developers and we really do spend a shit-ton of money on more and more defenses that are band-aids to deeper problems.

However, I don’t think defense in depth is dead. I think he has great points, but I’d throw a shmoo ball at him for the sensational title of the secret. 🙂 We’re humans, and humans are producing code. It just takes one incident (which he says in a later slide) and defenses can break. That’s the point of defense in depth. Not necessarily about band-aiding insecure code, but rather ensuring that 1) we account for mistakes and unknown holes, and 2) we make sure attackers have to really try, or collude, or take a lot of time. If I can solve issue GER, and that’s your only defense, I win. If I have to solve issue GER plus LIG, I’m stuck…or I have to find help or spend more time breaking in.

This defense in depth approach only makes it *look like* we’re just band-aiding insecure code, which we kind of are, but that’s just an ancillary issue. To put it better: it’s an arguable position. (Marcin, if you’re reading this, yes, I use these $10 words all the time!)

Secret #2 – We are over a decade away from professionalizing the workforce – Much of our jobs is learned through self-education, not professional education centered around security. How do we codify and instruct the next generation? Security is everyone’s problem…because no one really knows how to properly do it. We can’t train all our professionals, how do we expect to train all our users? Users need tools that they can’t screw up; that don’t require education to be used securely. Years and years away from making this better.

A-fucking-men. First of all, he’s got a point about not being professional yet. I went to school and got an MIS degree, which is, in effect, Comp Sci Lite. Did I get any information about security? Not a bit. Hell, I was barely prepared for a real technical job…I was more prepared to be a clueless analyst than technical. Bruce is absolutely fucking right that we’re almost all completely self-taught, either on the job or on our own. That’s not a professional workforce or industry. Not yet anyway.

I love his mention that security is not everyone’s problem. I love his mention that users don’t need training, they need tools they can’t fuck up. Absolutely! Likewise, if we pour a bunch of money into training, and an idiot or new user shows up and makes a mistake, all of that is wasted. We need the technological controls, and we need the secure systems, and we need the simplicity more than we need high-end training such that security can be everybody’s problem. That’s not to say I’m all about de-perimeterization!

But that gets back to defense in depth. Users will make mistakes, which is also what defense in depth helps to mitigate. Yes, I think the industry has gone overboard and yes, we spend way too much money on many levels of defense, and we need to start spending that money smarter, on better defenses, and more secure foundations. More on that coming up…

Secret #3 – Many of the security product vendors are about to be at odds with the rest of IT – The security industry has sold a lot of defense in depth; a lot of money that isn’t going to securing the foundation. Bruce uses Microsoft as a case study: Microsoft tries to make a more secure foundation, but then the vendors start complaining, and Microsoft has to bend and allow unsigned driver interaction.

Excellent points. In fact, this is an issue in more than just security. Lots of money is being spent on software and systems and security, and we're starting to question, "Why?" "Why did I spend XXX on 3 years of software assurance for MS SQL Server, when no new product came out from 2000 until 2005?" I have used the example of Microsoft trying to secure its own product in the past, because it dramatically illustrates how our landscape has changed, and how the matured security industry has agendas to protect. I've been saying that Microsoft can't just up and create a secure OS anymore. The vendors won't let them. They'll have to do it slowly, like boiling a lobster.

Defense in depth may not be dead, but he has a point that we really are spending too much on it.

Secret #4 – Full Disclosure is Dead – There is too much money to be made in selling bugs; even companies are paying for vulnerabilities. We want to make live systems more resilient to attack, but this market for vulns means those companies are (potentially) profiting at the expense of the end user.

Again, very true, and that last sentence makes a point I don't think I had realized outright before this talk. I still believe in full and/or responsible disclosure, but at least now I have some logic behind the bad taste those "pay for my vuln" scams leave in my mouth.

Bruce quickly closed out by re-emphasizing some of his suggestions:

Recognize that the landscape has changed. Push vendors to make products that actually create a secure foundation, not just more layers. We need to create a more formal body of knowledge for info security, and hold each other accountable.

This is an excellent talk, and I really love what he brings to the table. He wants to stir things up a bit, open discussions, and maybe even be wrong. But that's the sort of openness we need to keep striving for. He made a really brief mention of being open and sharing information rather than bottling it up to sell as a non-disclosed vulnerability; of not standing politely in line but keeping the energy we know we have when it comes to toeing the line. I can only imagine how a group conversation at a bar about this stuff could last all night long!

in linkedin

Oh noes, I’m in LinkedIn. Those of you who have bugged me…ok, ok, I’m digging it. I like that only people in my “network” get to see anything worthwhile about me. Anyway, if you read my blog at all and are in my network, chances are you’re “ok” to add, so feel free to find me, Michael Dickey. Or email/comment and I’ll find you instead.

when production data is allowed to visit the slums

Adam over at EmergentChaos posted this blurb, which I’m also going to quote, in regards to an Accenture data loss incident:

Connecticut hired Accenture to develop network systems that would allow it to consolidate payroll, accounting, personnel and other functions. Information related to Connecticut’s employees was contained on a data tape stolen from the car of an Accenture intern working on an unrelated, though similar project for the State of Ohio. (The tape also contained personal information on about 1.3 million Ohio residents.) The intern apparently had been using the Connecticut program as a template for the Ohio project.

Holy shit, do I hate when developers insist on using production data in development environments. It is amazing how difficult the fight can be to get them to use test data, or to take production data and thoroughly scrub it on the first copy down. Of course, later on they want "refreshes" downward, or they start sharing amongst themselves when one of them wins the fight for their project…

Couple that with the fight to allow them to put such data on their laptop, and you get a lot of bad blood pretty quickly over just two out of a gazillion issues.

It is going to be very important in coming years that companies who let someone else use their data get written statements about who has access to that data and where. Will it be on development systems in the squishy internal network, or available for an intern to query out and take home? Can you provide names of everyone who will have access? If you have any DBA duties, start preparing for this storm now! These questions are being whispered now*, but often aren't taken too seriously…yet.

1- Know who has access to what data, including queries as well as full database access.
2- Provide a process for requests and approvals for access to databases.
3- Know who ran what and took out what data. If an intern pulls a bunch out, you better well know it when they do. Know how to pull those logs and massage them for the answers.

These are just a few basic management questions that, if left unanswered, will leave management making uninformed decisions and taking uninformed actions.

As a side note, other questions are being and should be asked about the whole lifecycle of that data when it leaves the nest. Is the transmission of that data secure (SFTP, FTP, Web, Email…)? Is the first stop for that data secure and/or temporary (your contact's email box, the ftp server…)? How does the data get to the desired location, or is it kept somewhere internally before being used (uploaded to a file server, sits in someone's PST file, gets backed up to tape from the ftp server…)? When at the end location (database, hopefully), who has access to it? When the work is done or the contract terminated, what is the data removal process (tapes, servers, databases, official backups, backups the developers have made…)? Yes, it's more than the DBA, but really the easiest place to start is with the DBA duties.

powershell: list of sites in IIS

Getting a list of all the sites in IIS 6 is typically as easy as right-clicking Web Sites and choosing Export List. I decided I wanted to do this through PowerShell. I’m sure there are plenty of ways to do this, but this is one I got to use today.

$objSites = [adsi]"IIS://serverorlocalhost/W3SVC"
foreach ($objChild in $objSites.Psbase.children)
{ $objChild.servercomment }

This should output all the names of the sites that you would see in the IIS management console. If you want to know what else can be pulled, grab one of those objects and pipe it to get-member. I haven’t figured out how to pull the Home Directory, but IP should be under .ServerBindings, the ID should be under .Name, and so on. I suspect IIS7 will be even easier to manage via PowerShell.
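
One guess for the Home Directory (untested here): it may live on each site's ROOT virtual directory as a Path property, rather than on the site object itself. A rough sketch along those lines, where the IIsWebServer check and the /ROOT child path are my assumptions:

$objSites = [adsi]"IIS://serverorlocalhost/W3SVC"
foreach ($objChild in $objSites.Psbase.children)
{
    # only the IIsWebServer children are actual web sites
    if ($objChild.Psbase.SchemaClassName -eq "IIsWebServer")
    {
        # assumption: the home directory is the Path property on the site's ROOT virtual directory
        $objRoot = [adsi]($objChild.Psbase.Path + "/ROOT")
        "$($objChild.servercomment) -> $($objRoot.Path)"
    }
}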

imagine an open sourced axis network camera

I wasn't going to post about the recent vulns released for the Axis 2100 IP cameras. They are neat vulns which illustrate the dangers XSS and CSRF can bring to devices with web interfaces, and how even internal sites can become exploited ground. I especially like that you can replace a video feed, something you always see executed so effortlessly in movies. I really like the vuln where viewing the log files will execute javascript, which reminds me of a recent WS_FTP DoS that works in similar fashion. There are a couple videos out there showing off the exploit. Both links are in the paper (pdf).

No, I wasn't going to post it because I figured it would get covered well enough anyway. But then I read the paper. And on one of the last pages of the paper is the real meat that made me think, "Aw yeah!" The authors describe how they were able to glean enough information from an Axis development wiki to probably compile their own tools. Whoa, this just went to another level! Axis may not support this particular device anymore, but if people can successfully compile and upload tools onto this device, we could see a resurgence of popularity that mimics (on a smaller scale) the popularity of Linksys' WRT54G wireless router.

I really think Axis could take advantage of this interest and help anyone looking to build tools. I mean that seriously…if they decide to open source it more…

o3 magazine is back

I was scrolling through the latest Insecure magazine (did they get swallowed by net-security.org…?) and saw an ad on the very last page for o3 magazine? Huh? They appeared to be inactive earlier this year; in fact, their web site disappeared.

I checked the site and sure enough, they're not just back, but back with lots of content. Issues 6 and 7 were released in August 2007, and Issues 8 and 9 just came out for September 2007. Weird, but welcome!

on being aware of your environment

This ran across my Art of War calendar today, and makes a good statement about detection/logging, which is still being undervalued in today’s organizations.

What everyone knows is what has already happened or become obvious. What the aware individual knows is what has not yet taken shape, what has not yet occurred. Everyone says victory in battle is good, but if you see the subtle and notice the hidden so as to seize victory where there is no form, this is really good. (Chapter 4: Formation)

new book releases

A couple new books have been spotted in the wild! I ordered Security Power Tools last week from Bookpool. It should be arriving today or tomorrow! Yeah, I have books dealing with open source security tools like Nessus and the like, but I like the hands-on practical look that this book appears to take. I thumbed through it at the store this weekend and was surprised at the detail and also how thick the book is! I'm not sure I've seen an O'Reilly book this thick before. I love books like these, since I do believe someday the back will be broken in IT + security, where spending more money on ever-spendy tools and appliances will not be an option, but we'll still need to Get Shit Done. Open source and other freer tools are still going to be the future reality for most of us. Books like Special Ops, Hack Attacks, and Hack I.T. are mere blitzes compared to more in-depth information on using tools.

I also want to find and pick up Metasploit Toolkit for Penetration Testing, Exploit Development, and Vulnerability Research. It's a book I've been anticipating for some time, and it will kickstart my dive into Metasploit 3, since I've put off that transition for too long now. I checked around yesterday, but the local stores didn't have copies available. Since Bookpool is surprisingly not discounting this one, I may as well pick it up at the store!

security even a caveman can break

I saw via Bejtlich that InformationWeek has an excellent article up about Robert Moore, the hacker who, a few years ago, broke into quite a few telecom (and likely other) organizations to route and steal VOIP.

The article continues to pound home that we’re doing the simple things very badly. And we have no friggin’ clue when someone malicious is doing things inside our network. Here’s some meat, though:

“It’s a huge problem, but it’s a problem the IT industry has known about for at least two decades and we haven’t made much progress in fixing it,” said van Wyk. “People focus on functionality when they’re setting up a system. Does the thing work? Yes. Fine, move on. They don’t spend the time doing the housework and cleaning things up.”

That’s really a huge part of the problem, isn’t it? Implement VOIP, and hope that you get time to get back to it later to evaluate the security before your next big projects come up. And so on.

Really, I feel that this problem is twofold. First, we're still maturing in our grasp of technology. Unfortunately, and *naturally,* the attackers are maturing faster. This happens in biology as well, so we need to accept and expect it as a given. Second, we rarely have the time and resources to either do the job correctly up front or revisit it later and fix it up.