Wednesday, June 18, 2008

Hacked

Today I received a very unhappy email from a fellow saying my webserver had launched an attack against his FTP server and that I needed to stop it or he would contact the Federal Authorities. To be perfectly honest, I didn't believe it at first and asked him to produce logs verifying the attack. But then I went and checked my server and discovered it was running a script named ftp_scanner, which appeared to be attempting brute-force attacks against random FTP servers. ack.

I quickly killed all the ftp_scanner processes and found the offending script on the server (cleverly hidden in /tmp/.../, a name that is invisible to a standard 'ls' and blends in with the '.' and '..' entries under 'ls -a'). With the immediate problem addressed, I tried to figure out how this could have happened. To my horror, I discovered that on Thursday of last week someone had run a brute-force attack against my SSH server and happened upon one of my users whose password was the same as her username. double ack!
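
If you want to hunt for the same trick on your own machine, a few lines of Python will do it (a rough sketch, assuming the attacker's stash lives somewhere under /tmp and uses a directory name made entirely of dots or spaces):

    #!/usr/bin/env python
    # Rough sketch: walk /tmp and flag directory names that are easy to
    # overlook -- names made up entirely of dots and/or whitespace.
    import os

    SUSPECT_CHARS = set('. \t')

    for root, dirs, files in os.walk('/tmp'):
        for d in dirs:
            if d and set(d) <= SUSPECT_CHARS:
                print('suspicious directory: %r' % os.path.join(root, d))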

A little back story is useful here... on Friday my server went down in a sort of funky way. I could still ping it, but http and ssh access were denied. It took all weekend working with my provider to get it re-enabled. They said it was because CPU usage had spiked, and since it's a virtualized server, my slice was shut off to prevent damage to the larger system. I should have investigated then, but I just figured the detection systems were borked and thought nothing of it. Bad idea.

Two days later, the intrepid attackers struck again... and I would never have known if not for the email from the poor guy whose server mine was attacking. But that's not the worst of it. While cleaning things up, I noticed an SSH login to the 'news' account, a system account that you normally cannot log into. It was then that I discovered the /etc/shadow password file had been tampered with to enable a variety of logins that should not have been possible. This, unfortunately, was the worst possible news. If the attackers could change /etc/shadow, it meant they had managed to obtain root-level access to my server. ack, ack, ack.
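
A quick way to check for that kind of tampering is to flag any system account that has a usable password hash, since accounts like 'news' are normally locked with '*' or '!' (a rough sketch; it assumes regular users start at UID 1000, which is 500 on some distros, and it needs root to read /etc/shadow):

    #!/usr/bin/env python
    # Rough sketch: flag system accounts whose shadow entry holds a
    # usable password hash instead of a lock marker ('*' or '!...').
    import pwd
    import spwd  # reading /etc/shadow requires root

    for user in pwd.getpwall():
        if user.pw_uid >= 1000 or user.pw_name == 'root':
            continue  # skip regular users and root itself
        try:
            hash_field = spwd.getspnam(user.pw_name)[1]  # the crypt(3) hash
        except KeyError:
            continue
        if hash_field and hash_field != '*' and not hash_field.startswith('!'):
            print('system account %s can log in with a password' % user.pw_name)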

I went back to the /tmp/.../ folder to poke around the contents. It was then that I discovered the Linux vmsplice Local Root Exploit. And indeed, running the tests it described confirmed that my system was vulnerable and the entire slice had been compromised. Since I don't run tripwire or anything like that, I was pretty much screwed. oh, ack...
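
If you are on a 2.6 kernel, it is worth at least comparing your version against the range reported for the vmsplice hole (CVE-2008-0600), roughly 2.6.17 through 2.6.24.1. A version check is no substitute for actually running the exploit's test, but it is a quick first pass (a rough sketch):

    #!/usr/bin/env python
    # Rough sketch: compare the running kernel against the version range
    # reported as affected by CVE-2008-0600 (the vmsplice local root hole).
    import platform
    import re

    release = platform.release()  # e.g. '2.6.22-14-server'
    m = re.match(r'(\d+)\.(\d+)\.(\d+)(?:\.(\d+))?', release)
    if m:
        version = tuple(int(part or 0) for part in m.groups())
        if (2, 6, 17, 0) <= version <= (2, 6, 24, 1):
            print('kernel %s falls in the reported vulnerable range' % release)
        else:
            print('kernel %s is outside the reported range' % release)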

All user data is now backed up onto my local desktop and the slice is scheduled to be cleared. Once the kernel is secured, I will have to rebuild the system from the ground up.

Oh, and if "Not Rick" is out there, I'm sorry to have caused you any trouble... but contacting me via means that prevent me from replying makes it difficult to apologize or explain the situation.

1 comment:

Anonymous said...

You could try a system similar to what I use: I've replaced syslogd with metalog, primarily because metalog allows you to trigger a script to be run on particular log events. Whenever a failed ssh attempt is logged, I take several actions:

1) I test the attempt against a blacklist of usernames (root, test, apache, etc). If the failed login matches one of these, I drop the remote IP at the firewall level and schedule a future command to undrop it a week later (via the "at" command). I undrop it to keep my iptables blacklist from growing out of control and to account for the reassignment of dynamic IP addresses.

2) I keep a small database (using gdbm, but sqlite would work too) of attempts. If the same IP fails to log in more than a preset number of times (say 3), I drop it for a week.

This is all done via some simple Python scripts (any language would do) that are triggered from metalog via regular expressions. This has kept every shared hosting box I manage from being brute-forced for almost 5 years now.
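
Roughly, the handler looks something like the sketch below. It is a simplified illustration rather than my production script: the blacklist, threshold, database path, and log-line regex are placeholders, and it assumes the logger hands the matched log line to the command as its last argument (check your logger's docs for how it actually passes the line):

    #!/usr/bin/env python
    # Rough sketch of the setup described above: on each failed SSH login,
    # blacklist-check the username, count failures per IP, and drop
    # offenders at the firewall with an automatic undrop a week later.
    import re
    import subprocess
    import sys

    try:
        import anydbm as dbm  # Python 2
    except ImportError:
        import dbm            # Python 3

    BLACKLIST = set(['root', 'test', 'admin', 'apache', 'oracle'])
    THRESHOLD = 3                              # failures allowed before dropping
    DB_PATH = '/var/lib/ssh-failures.db'       # per-IP failure counts

    LINE_RE = re.compile(r'Failed password for (?:invalid user )?(\S+) from (\S+)')


    def drop(ip):
        """Drop the IP at the firewall and schedule an automatic undrop."""
        subprocess.call(['iptables', '-I', 'INPUT', '-s', ip, '-j', 'DROP'])
        # Queue removal of the matching rule one week from now via at(1).
        at = subprocess.Popen(['at', 'now + 7 days'], stdin=subprocess.PIPE)
        at.communicate(('iptables -D INPUT -s %s -j DROP\n' % ip).encode())


    def main():
        line = sys.argv[-1]
        m = LINE_RE.search(line)
        if not m:
            return
        user, ip = m.groups()

        if user in BLACKLIST:
            drop(ip)
            return

        db = dbm.open(DB_PATH, 'c')
        try:
            try:
                count = int(db[ip]) + 1
            except KeyError:
                count = 1
            db[ip] = str(count)
            if count >= THRESHOLD:
                drop(ip)
        finally:
            db.close()


    if __name__ == '__main__':
        main()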