I wrote this post as a draft several years ago. This isn’t really finished, but I thought I’d post it anyway.
A friend of mine works for a small company that recently bought an even smaller company on one of the coasts. That smaller entity recently experienced a security breach. While this isn’t a juicy attack or anything beyond your everyday opportunistic breach, it can certainly feel exotic to a small business that doesn’t experience “real” breaches very often.
The scenario.
On Christmas Eve, about 8 months after the acquisition, users from that company reported issues syncing their iPhones with the company Exchange server. The first admin to investigate found signs of something suspicious and escalated to my buddy’s IT team.
Turns out an attacker had been pounding on an externally accessible RDP service for several months; the log files were spammed with failed login attempts. The firm that manages these servers for the newly acquired company prefers to access them via RDP, which is allowed via an Any->Server:3389 rule in the firewall. The attacker eventually guessed the administrator password, logged into the RDP console, and had full access to the system.
Oh, and this system isn’t just the web front end for their Exchange infrastructure, but is actually the Exchange server itself.
Once inside, the attacker created a new local admin account named “Common.” He then installed FileZilla. I imagine he did so either to exfiltrate data from the server (no evidence of that) or to upload his tools to it (evidence of that). There were no egress firewall rules, but ingress rules did limit traditional FTP connections, so the attacker configured the FileZilla service to listen on port 80 instead. That is what triggered the user uproar about email sync issues.
From what I know from my friend, the attacker tried to run a tool called MassSender.exe, and a quick glance at the evidence suggested the motive for the attack was to acquire a new spam-generating server. Once the FTP service was found and it became clear that this was not just an email front-end system but the actual Exchange server, my friend backed off and brought in a third party for a full analysis. Some remediation steps were taken along the way, but the system in question remained live and functional throughout.
So what went wrong here, and what lessons can be learned? Warning: I do make a few presumptions below, but nothing completely wild, and I’ll try to call them out when I do.
Prevention
1. Change/restrict the default admin account – The built-in local Administrator account should at least be renamed (there’s a quick sketch of this right after the list).
2. Limit access, or don’t use RDP for true remote management – RDP open to the world isn’t a good idea, especially in default configurations. If you really need it, limit via firewall rules which source addresses are allowed to talk to RDP (see the RDP scoping sketch below).
3. Review your ingress/egress firewall rulesets – Turns out every server had several firewall ingress allowances: RDP, tcp 80, tcp 443, even when those servers didn’t use those services. Firewall rules should only allow what you actually need, and that will differ from device to device; a blanket standard won’t work. Essentially, this just illustrates a lack of security-minded planning and regular firewall rule maintenance. The home office was surprised to learn of these allowances. (An audit sketch follows the list.)
4. Administrator password – I don’t know how difficult the administrator password was, but clearly it was eventually guessable via brute force. Further, it was the same password across all backend systems, though thankfully the attacker never went further than the first server. Really put some thought into making those passwords very difficult and unique per server. (A password-generation sketch follows the list.)
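For item 1, the rename can be scripted. Here’s a minimal sketch that shells out to PowerShell’s Rename-LocalUser from Python; the new account name is just a placeholder I made up, so pick whatever fits your naming scheme.

```python
import subprocess

# Rename the built-in local Administrator account.
# "corp-admin" is a hypothetical name; pick something that fits your environment.
subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Rename-LocalUser -Name 'Administrator' -NewName 'corp-admin'"],
    check=True,
)
```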
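For item 2, here’s a sketch of scoping the RDP allowance down to a management network instead of Any. It wraps netsh from Python; the subnet and both rule names are hypothetical, so substitute whatever the managing firm actually uses.

```python
import subprocess

# Hypothetical management subnet where the managing firm's admins actually sit.
MGMT_SUBNET = "203.0.113.0/24"

# Disable the old world-open rule (hypothetical rule name) ...
subprocess.run(
    ["netsh", "advfirewall", "firewall", "set", "rule",
     "name=Allow RDP (Any)", "new", "enable=no"],
    check=True,
)

# ... and add one scoped to the management subnet instead.
subprocess.run(
    ["netsh", "advfirewall", "firewall", "add", "rule",
     "name=RDP - mgmt subnet only", "dir=in", "action=allow",
     "protocol=TCP", "localport=3389", f"remoteip={MGMT_SUBNET}"],
    check=True,
)
```

Scoping by source address doesn’t make RDP safe, but it does stop the entire internet from pounding on it for months.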
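And for item 3, a rough audit sketch: dump the inbound ruleset with netsh and flag anything world-open on ports you care about. This parses the English-locale netsh text output (field labels like “RemoteIP” and “LocalPort” are what I expect from that output), so treat it as a starting point rather than a definitive tool.

```python
import subprocess

# Ports that showed up open on every server in this story.
WATCH_PORTS = {"3389", "80", "443"}

out = subprocess.run(
    ["netsh", "advfirewall", "firewall", "show", "rule", "name=all", "dir=in"],
    capture_output=True, text=True, check=True,
).stdout

def world_open(rule):
    return rule.get("RemoteIP") == "Any" and rule.get("LocalPort") in WATCH_PORTS

rule, flagged = {}, []
for line in out.splitlines():
    if ":" in line:
        key, _, value = line.partition(":")
        rule[key.strip()] = value.strip()
    elif not line.strip() and rule:
        # A blank line ends one rule block.
        if world_open(rule):
            flagged.append(rule.get("Rule Name", "<unnamed>"))
        rule = {}
if rule and world_open(rule):
    flagged.append(rule.get("Rule Name", "<unnamed>"))

for name in flagged:
    print("World-open inbound rule:", name)
```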
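For item 4, there’s no magic beyond length, randomness, and uniqueness per server. A tiny sketch using Python’s secrets module follows (the server names are made up); tooling like Microsoft’s LAPS exists to manage per-machine local admin passwords for you.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"

def new_password(length: int = 24) -> str:
    """Return a long random password suitable for a local admin account."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Hypothetical server list; the point is one long password per box, never shared.
for server in ["exch01", "web01", "file01"]:
    print(f"{server}: {new_password()}")
```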
Detection
1. Check logs – Always easier said than done, but evidence of a brute-force attack on RDP, months of failed logins, was spammed all over the logs. No one noticed. (A small example follows this list.)
2. Installed software – Don’t you want to know when new software is installed on your servers? You should inventory software and/or somehow flag the appearance of anything new. Actually installing something is a very noisy move for an attacker, but clearly it still happens. (Inventory sketch below.)
3. New executables/new network access – This is more advanced, but you might want to know when a new process fires up, especially if it also reaches out to the network. (Sketch below.)
4. Egress firewall rules and traffic analysis – Just having egress firewall rules doesn’t detect anything, but you should be able to interrogate blocked traffic to see whether it was expected or not, as well as do some netflow analysis to spot new trends or communication paths. This is also pretty advanced, as most people have more to do than sit and analyze this stuff, sad to say. (Rough example below.)
5. Disabling of security software – Yeah, the security software was disabled by the attacker’s new admin account. That shouldn’t normally happen, and something should flag it, whether a service monitor, a centralized security management dashboard, or log analysis. (Quick check sketched below.)
6. New accounts/new admin accounts – If you do any log analysis at all, this is the bare-minimum item to watch and alarm on. If a new account appears, you want to know about it. If a new local admin account appears, you *really* want to know about it. (Baseline-diff sketch below.)
7. Users and admin gut feelings are still great detectors – Security doesn’t like to admit it (neither does operations), but some of the best detection mechanisms will always be users reporting that something is off, and admins who just feel like something is wrong or out of place, or whose attention gets caught by a quick glance at the logs while passing through a server. It sure would be nice if detection didn’t have to rely on these things, but always, always, always remember that at least you have this layer. It’s still better than being told by the authorities or a third party that you’ve been fuxxed.
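To put a little meat on item 1: the failed RDP logons that sat unnoticed are Security event 4625. A minimal sketch that pulls the last day of 4625s with wevtutil and tallies the source addresses and targeted accounts (the XPath time filter is in milliseconds, and this needs to run elevated); on a box being brute forced, this lights up immediately.

```python
import re
import subprocess
from collections import Counter

# Security event 4625 = failed logon; timediff() is in milliseconds (24h here).
QUERY = "*[System[(EventID=4625) and TimeCreated[timediff(@SystemTime) <= 86400000]]]"

xml = subprocess.run(
    ["wevtutil", "qe", "Security", f"/q:{QUERY}", "/f:xml", "/c:5000", "/rd:true"],
    capture_output=True, text=True, check=True,
).stdout

# Pull the attacker-controlled bits out of the event XML.
sources = Counter(re.findall(r"<Data Name=['\"]IpAddress['\"]>([^<]+)</Data>", xml))
accounts = Counter(re.findall(r"<Data Name=['\"]TargetUserName['\"]>([^<]+)</Data>", xml))

print("Failed logons in the last 24h:", sum(sources.values()))
print("Top source addresses:", sources.most_common(5))
print("Top targeted accounts:", accounts.most_common(5))
```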
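For item 2, a simple inventory diff goes a long way. This sketch reads the standard uninstall keys out of the registry and compares against a baseline file (the filename is made up); run it on a schedule and a surprise FileZilla install shows up the next morning.

```python
import json
import winreg
from pathlib import Path

UNINSTALL_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_software() -> set:
    """Collect DisplayName values from the standard uninstall registry keys."""
    names = set()
    for path in UNINSTALL_KEYS:
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue
        with key:
            for i in range(winreg.QueryInfoKey(key)[0]):
                with winreg.OpenKey(key, winreg.EnumKey(key, i)) as sub:
                    try:
                        names.add(winreg.QueryValueEx(sub, "DisplayName")[0])
                    except OSError:
                        pass
    return names

BASELINE = Path("software_baseline.json")  # hypothetical baseline location
current = installed_software()
if BASELINE.exists():
    for name in sorted(current - set(json.loads(BASELINE.read_text()))):
        print("NEW SOFTWARE:", name)
BASELINE.write_text(json.dumps(sorted(current), indent=2))
```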
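Item 3 really wants something like Sysmon, which (if it’s installed with a config that logs them) records process creation as event 1 and network connections as event 3 in its own log. A rough sketch of pulling the most recent ones, assuming Sysmon is present:

```python
import subprocess

# Sysmon event 1 = process create, event 3 = network connection.
QUERY = "*[System[(EventID=1 or EventID=3)]]"

out = subprocess.run(
    ["wevtutil", "qe", "Microsoft-Windows-Sysmon/Operational",
     f"/q:{QUERY}", "/f:text", "/c:25", "/rd:true"],
    capture_output=True, text=True, check=True,
).stdout

# In real life you'd ship these to central logging; printing is just for the sketch.
print(out)
```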
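For item 4, even the built-in Windows Firewall log gives you something to interrogate once you enable logging of dropped packets. A rough sketch that tallies blocked outbound destinations from pfirewall.log, assuming the default W3C-style field header and log path (both of which you may need to adjust):

```python
from collections import Counter
from pathlib import Path

# Default Windows Firewall log location; dropped-packet logging must be enabled first.
LOG = Path(r"C:\Windows\System32\LogFiles\Firewall\pfirewall.log")

fields, blocked = [], Counter()
for line in LOG.read_text(errors="ignore").splitlines():
    if line.startswith("#Fields:"):
        fields = line.split()[1:]  # column names from the header line
    elif line and not line.startswith("#") and fields:
        row = dict(zip(fields, line.split()))
        # Count outbound drops by destination ip:port.
        if row.get("action") == "DROP" and row.get("path") == "SEND":
            blocked[f'{row.get("dst-ip")}:{row.get("dst-port")}'] += 1

for dest, count in blocked.most_common(10):
    print(count, dest)
```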
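Item 5 can be as dumb as a scheduled check that the endpoint protection service is still running. The service names below are placeholders; use whatever your AV or endpoint agent actually registers.

```python
import subprocess

# Hypothetical service names; replace with your actual security agent's services.
WATCHED_SERVICES = ["WinDefend", "YourAvService"]

for svc in WATCHED_SERVICES:
    result = subprocess.run(["sc", "query", svc], capture_output=True, text=True)
    if "RUNNING" not in result.stdout:
        # Wire this into email, a ticket, or whatever alerting you already have.
        print(f"ALERT: security service '{svc}' is not running (or not installed)")
```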
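And for item 6, if you’re not doing centralized log analysis yet (Security events 4720 and 4732 are the ones to watch there), even a baseline diff of the local Administrators group would have caught the attacker’s “Common” account. A minimal sketch, parsing English-locale `net localgroup` output against a hypothetical baseline file:

```python
import subprocess
from pathlib import Path

out = subprocess.run(
    ["net", "localgroup", "administrators"],
    capture_output=True, text=True, check=True,
).stdout

# Members are listed between the dashed separator and the closing status line.
members, in_list = set(), False
for line in out.splitlines():
    if line.startswith("----"):
        in_list = True
    elif line.lower().startswith("the command completed"):
        break
    elif in_list and line.strip():
        members.add(line.strip())

BASELINE = Path("local_admins_baseline.txt")  # hypothetical baseline location
if BASELINE.exists():
    for new in sorted(members - set(BASELINE.read_text().splitlines())):
        print("NEW LOCAL ADMIN:", new)  # this is where "Common" would show up
BASELINE.write_text("\n".join(sorted(members)))
```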