Writeup Of My 13 Jan 2009 Ownage

Introduction


On 13 January 2009, one of my (thankfully) less important servers
got owned (that is: rooted,
hacked, cracked, subverted (that was for search engines)). Fully
owned: the attackers had root access.

Eww.

It's worth noting here that I've been a sysadmin since about 1996,
and I'm very good at what I do; there really is no such thing as
being immune to attack if you want your computers accessible from
the Internet and useful for yourself and others. In this particular
case, in fact, I have deliberately chosen to not remove the
"vulnerability" in question.

In the hope that it will be useful to others, here is what happened,
as best I can reconstruct it, and the steps I took to help mitigate
the problem in the future.

Timeline Of My Experiences

  • 13 January 2009 Morning
    • I get in to work, and notice that my standing connection to the server in question (I'm a GNU Screen addict) is down.
    • The machine is not responding to SSH at all. All other services appear to be up. Logcheck is still sending mail.
    • In the logcheck output, I can see that sshd is trying to start, but failing. I can also see that cron-apt (automated apt updates) ran, but about 3 hours earlier.
    • Samhain says that /etc/ssh2 and related files have been created, and that /usr/sbin/sshd has changed. This could be cron-apt, but there were no ssh packages in that run, and Debian claims that nothing it ships creates /etc/ssh2.
    • I'm not completely sure I've been owned, but I'm suspicious.
  • 13 January 2009 Afternoon
    • My home network breaks down; ping times to other servers are > 1500ms. I call someone at home and have them power the affected server off. Ping times instantly drop to ~15ms. I am now pretty certain I was owned.
  • 13 January 2009 Evening
    • I investigate the server. I only become completely certain I was owned when I notice that "last" shows a login immediately before the /etc/ssh2 creation; when I call the person in question (whom I trust enough to give sudo to), he denies having done any such thing.
    • Oh Dear.
    • On the advice of a friend, I pull the drive and attach it to another computer for forensics.


The friend who suggested the drive removal, btw, Tox, is a bad-ass security admin and has been extremely helpful throughout this process.

What Seems To Have Happened

  • 12-13 January 2009 Some Time
    • A host that a friend, we'll call him Leaky, uses gets owned. It's the host of Leaky's former employer, so it's not Leaky's fault. We'll call the host Leaky's Mail.
    • The attackers install a bogus sshd on Leaky's Mail. The bogus sshd collects passwords.
    • The attackers get Leaky's password.
  • 13 January 2009 07:14 or so
    • I trust Leaky implicitly, and he's a good sysadmin, so he's got sudo on my box.
    • The attackers log in to my machine as Leaky, and almost instantly run "sudo su -", which means no sudo logging, of which more anon. From here on in, everything they do is guesswork.
    • They install their bogus sshd as a binary (on Leaky's Mail they appear to have used source, since Leaky has said source). It fails, hence ssh going down. They never fix it.
    • They install, and presumably run, a botnet zombification script.
  • 13 January 2009 Afternoon
    • They decide to do something with their new bot; who knows what.

What They Left Behind

  • A modified version of ssh/sshd. AFAICT they didn't leave the source.
  • Botnet Zombie scripts. I actually managed to find the controlling IRC channel, which amused me.

What I Had Running That Helped

  • Logcheck, a program that mails you about anything that appears in any log it doesn't specifically recognize as benign. It kept mailing me throughout the event, in fact, even when I couldn't access the machine.
  • Samhain, a program that keeps track of file states and alerts you when an important file has changed (in this case, the entire /etc/ssh2 directory and associated binaries in other places).
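For the curious, the Samhain side of that is just a few policy lines. A sketch of the relevant samhainrc fragment (section and directive names follow Samhain's config format as I understand it; check your own samhainrc before copying):

```
[ReadOnly]
# Alert on any change to the sshd binary or its configuration
file=/usr/sbin/sshd
file=/usr/bin/ssh
dir=/etc/ssh
```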

Mitigation Strategies


Well, I have no intention of turning off sudo for Leaky (which is
what I did the last time this happened, with someone other than
Leaky).

So I basically have two options for mitigation: restrict sudo so
Leaky can't do anything useful (I'd rather not) or accept that this
sort of thing is going to happen sometimes and try to produce as
much forensics as possible so I don't have to spend days trying to
figure out what happened. The faster I can figure out exactly what
happened, the faster I can repair the machine and move on.

I've very consciously chosen the latter: I'm making no real attempt
to actually close the "hole" in question, I'm simply turning the
logging up to eleven.

I am, in fact, using 3 disparate methods of logging all actions
taken by a root user (and most anyone else, in fact). Anyone who
identifies and fixes all 3 is a seriously motivated and skilled
attacker, and quite frankly that simply isn't in my threat model.
My threat model is script kiddies trying to make botnets (which is,
in fact, exactly what they were doing).

Process Accounting


Process accounting will show the programs run on your system and who
ran them, but not the arguments, in a wide variety of heniously
unreadable formats. "lastcomm" seems to be the least offensive. If
someone could explain to me why a shell script that runs for a
minute or so generates several hundred lines in lastcomm, I'd
love to hear it.
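As for the several hundred lines: my best guess is that it's simply accounting working as designed, since it records one entry per process, and a shell script forks a new process for nearly every command it runs. A sketch of the setup (package and command names are the Debian ones; verify on your system, and note the loop at the bottom just demonstrates the one-record-per-process behavior):

```shell
# Enabling process accounting on Debian (run as root):
#   apt-get install acct            # provides accton(8) and lastcomm(1)
#   accton /var/log/account/pacct   # the package normally does this at boot
# Afterwards, for example:
#   lastcomm --user leaky | head    # recent processes run by "leaky"
#
# Why shell scripts flood lastcomm: every command in a script is its own
# fork+exec, hence its own accounting record. This loop alone would
# generate 100 records:
records=0
for i in $(seq 1 100); do
  /bin/true                        # one process, one accounting record
  records=$((records + 1))
done
echo "$records"
```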

Snoopy Logger


Snoopy Logger is a
shared library that logs all exec calls and their arguments (up to
32 characters, which you can change if you're willing to recompile
the thing yourself) by sending them to syslog. You put it in your
ld.so.preload and voila: complete logging of every program run by
anyone on your system.
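Enabling it system-wide really is just one line in /etc/ld.so.preload. The path below is where I'd expect Debian to put the library; confirm with "dpkg -L snoopy" or similar first, and note that ld.so.preload does not take comments:

```
/usr/lib/snoopy.so
```

From then on every exec() on the box, by any user, produces a syslog line recording the program and its arguments.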

If you use logcheck or similar, you will want to very rapidly add an
ignore rule for Snoopy's output, because boy howdy is there a lot of
it.
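An ignore rule is one regex in a file under /etc/logcheck/ignore.d.server/ (the file name and the exact pattern here are my guesses; test against your real log lines before trusting either). A quick way to sanity-check the regex:

```shell
# Proposed logcheck ignore rule for Snoopy's syslog lines:
rule='^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ snoopy\[[0-9]+\]:'
# Installed (as root) with something like:
#   echo "$rule" > /etc/logcheck/ignore.d.server/snoopy
# Sanity check against a sample line in the usual syslog format:
sample='Jan 13 07:14:02 myhost snoopy[4321]: [uid:0 sid:1]: sudo su -'
echo "$sample" | grep -Eq "$rule" && echo match
```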

WARNING: If you do not have log reaping turned on for
/var/log/auth.log (or wherever Snoopy's logging ends up) you will
regret it.
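On Debian, auth.log is normally rotated already (both sysklogd and rsyslog ship rotation for it), so the real task is confirming that rotation is actually happening and is aggressive enough for Snoopy's volume. If you do need to add it yourself, a logrotate stanza along these lines would do (the file name /etc/logrotate.d/auth and the retention numbers are arbitrary choices of mine):

```
/var/log/auth.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
```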

Modified Root Shells


This is by far the most complicated, because you have to get the
scripts just so, but the idea is you replace root's shell with a
script that runs "script" around the real shell. Here's a sample:

#!/bin/bash-static

unset SHELL
script -q -c "/bin/bash-static -l $*" -f /var/log/root_shells/$(basename $0).$$


(I'm of the opinion that root should never have a dynamically linked shell).
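To see the mechanism in isolation: "script" captures everything the wrapped shell prints, so each root login leaves a full transcript named after the invoked shell and its PID. A minimal, self-contained demonstration of the capture step (this assumes util-linux script; the BSD version takes its arguments differently):

```shell
# Record a command's output the same way the wrapper records a root shell:
log=$(mktemp)
script -q -c 'echo inside-logged-shell' "$log" >/dev/null
# The transcript now contains everything the wrapped "shell" printed:
grep -c inside-logged-shell "$log"
rm -f "$log"
```

In the real setup, the wrapper above does exactly this, except the transcript lands in /var/log/root_shells/$(basename $0).$$ and the wrapped command is root's actual login shell.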

WARNING: If you do not do some kind of reaping for the output created
by the special script-wrapped shell, you will regret it.
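Reaping here can be as simple as a daily cron job deleting old transcripts; the 30-day cutoff below is an arbitrary number of mine. The deletion itself, demonstrated on a throwaway directory standing in for /var/log/root_shells (GNU touch and find assumed):

```shell
# Stand-in for /var/log/root_shells:
dir=$(mktemp -d)
touch "$dir/bash-static.1234"
# Pretend the transcript is 40 days old:
touch -d '40 days ago' "$dir/bash-static.1234"
# The cron job's payload: remove transcripts older than 30 days.
find "$dir" -type f -mtime +30 -delete
ls "$dir" | wc -l
rm -rf "$dir"
```

In production you'd presumably drop the find line into /etc/cron.daily, pointed at the real /var/log/root_shells.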

A Puppet Module For All That Stuff


WARNING: I have no idea what this will do on a non-Debian
system.

WARNING: This module forcibly installs the sysklogd package,
which you may not want.

I've written a Puppet
module for all of the mitigation strategies described above; it also
manages sudo.
Here it is as a tarball. Unpack that in your Puppet "modules" directory and
"include root" in whatever classes you'd like to use it in.