
The Sleuth Kit Informer Issue 10


http://www.sleuthkit.org/informer
http://sleuthkit.sourceforge.net/informer

Brian Carrier
carrier at sleuthkit dot org

Issue #10
November 16, 2003

Contents

  • Introduction
  • What's New?
  • UNIX Incident Verification with The Sleuth Kit

Introduction

For quite a while now, I have been meaning to do a series of articles on incident response using The Sleuth Kit and Autopsy and the recent Honeynet Scan of the Month #29 has motivated me to finally do it. The main article in this issue of the Sleuth Kit Informer covers the basics and focuses on what I call the verification phase of incident response. The goal of this phase is to verify an alert or report and assess the scope of the incident so that you can decide if a full investigation should occur.

This article will only use The Sleuth Kit tools and not Autopsy. I will be adding new functionality to Autopsy (as I have been saying for over a year) that allows it to be more easily used for incident response.

What's New?

On November 15, new versions of The Sleuth Kit and Autopsy were released. Autopsy received minor error handling fixes and some interface improvements. The Sleuth Kit had two bug fixes in the NTFS code, added support for Solaris disk labels on the Intel platform, and gained a new 'icat' flag that displays slack space along with the file. The 'ffind' tool also now reports the attribute for an NTFS image; this was identified as a problem with the NTFS keyword search image that I released to the CFTT mailing list:
http://dftt.sourceforge.net/test3/

UNIX Incident Verification with The Sleuth Kit

Introduction

Your pager starts to vibrate and a message from the response team coordinator appears. An alert has been generated for one of the Linux web servers and she needs you to investigate it. It is a public facing server, so you will need solid evidence of an incident before you can remove it from the network. Your dilemma is that you need to quickly determine if the system has been compromised, but you know that carelessly looking at files and running commands could destroy evidence. Worse, if malicious programs have been installed, they could feed you false information or damage the system.

This article will show steps that can be taken with The Sleuth Kit in this scenario. The focus will be on minimizing the amount of data that is written to the system, minimizing the trust that is placed in the system, and minimizing the opportunities for the suspect system to generate false data. The Sleuth Kit is useful during an incident verification and live analysis because it is command line-based, can be compiled statically (on many systems), and can bypass the kernel functions that are typically modified by rootkits.

This is the first article in a series on incident response. This article only covers the verification from the file system point of view. Other techniques will allow you to verify the incident based on the processes that are running and the ports that are open, but they won't be covered here.

The first section of this article will outline the basic goals of incident verification, the second will briefly discuss trusted CDs, the third will describe how we can minimize writing data to the suspect disk, the fourth will cover why The Sleuth Kit is useful in this phase, and the final section will describe how we can use The Sleuth Kit to verify an incident.

This article uses results from the recent Honeynet Scan of the Month Challenge [1], which was of a compromised Linux system.

Goals and Requirements of Incident Verification

My recent research at CERIAS has forced me to focus on definitions [2], so I'm going to first define what some terms mean to me. Incident response is the process of receiving an incident alert about a system, verifying the alert and assessing the system, and getting the system back to a known and trusted state. During the process, decisions will be made about whether a full investigation should occur (i.e. computer forensics) or whether the system should just be rebuilt.

Incident verification is the process of confirming or rejecting an incident alert or report so that a decision about the investigation scope can be made. When the verification process starts, it is unknown if a full investigation will be warranted so we must assume that one will occur. In a general sense, incident verification can verify that an employee violated a corporate policy, can verify that a person on parole has violated his release terms, or can verify that a server has been compromised. This article is going to focus on the latter situation because it is the situation where it is most likely that the system has been configured with malicious programs.

I use two guidelines for this process:

  1. Minimize the amount of trust that you place in the system so that more accurate information is collected.
  2. Minimize the amount of data that you change on the system so that evidence is preserved.

I will outline how (I think) you can handle these guidelines in the following sections.

Making a CD of Trusted Tools

The first step towards minimizing the amount of trust that you place in the suspect system is to use your own tools, so burn a CD with trusted copies of the tools you will need. Computer programs typically rely on libraries that are on the local system. Those can be modified by an attacker, so it is best to use tools that have been statically compiled. Unfortunately, this is not always easy, and some OSes will not let you build them.

To make static binaries for The Sleuth Kit, type 'make static' instead of the usual 'make'. As I previously stated, this will not work on all systems. For a file system analysis, the CD should have netcat, the non-Perl tools from The Sleuth Kit, and 'mount'. The CD should also have non-file system tools such as 'ps', 'netstat', 'lsof', 'ifconfig', 'ls' etc.
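As a sketch of how to confirm that a tool is really static before it goes on the CD, the check below uses file(1), which labels a binary as "statically linked" or "dynamically linked". The path '/bin/ls' is just an example; substitute each tool you plan to burn:

```shell
#!/bin/sh
# Check whether a binary is statically linked before trusting it on the CD.
# file(1) reports "statically linked" for static executables.
is_static() {
    file -L "$1" | grep -q "statically linked"
}

# Example: test the local 'ls' (replace with the tool you plan to burn).
if is_static /bin/ls; then
    echo "/bin/ls: static -- safe to burn"
else
    echo "/bin/ls: dynamic -- it would load libraries from the suspect system"
fi
```

Running 'ldd TOOL' is an alternative check: a fully static binary makes ldd report that it is not a dynamic executable.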

Getting Data Off of the Suspect System

The CD of trusted tools helps us achieve guideline #1, so now we will look at guideline #2. The most obvious way to minimize the amount of data that you change on the system is to not write data to the system at all and instead write it somewhere else. This section covers how you can send any standard output data to another system using netcat [3]. The techniques covered in this section are not unique to The Sleuth Kit and can be used with any command line tool.

netcat is a command line network utility that can be used as both a client and a server. netcat will be run as a server on your trusted system where you want to write data to, your laptop for example. We'll call this our evidence server. To put netcat into server mode, use the '-l' flag to make it listen and specify the port to listen on with '-p PORT'. All data received on that port will be displayed, so you should redirect the output to a file. For example, to listen on port 9000 and save the data to ps.dat:

# nc -l -p 9000 > ps.dat

On the suspect system (the system you don't want to write data to), run a data collection command as normal and pipe the output to a trusted version of netcat that has been statically compiled and installed on a CD. netcat runs as a client by default, so no special flags are needed other than the address of the server and the port number. netcat will not close the connection until you manually do it, but you can provide the '-w TIMEOUT' flag to specify the number of seconds that netcat should wait after data flow stops before it closes the connection. For example, to run 'ps', send the data to 10.1.32.55 on port 9000, and close the connection 3 seconds after no more data is sent:

# ./ps | ./nc -w 3 10.1.32.55 9000

Note that these were run from a trusted CD (as previously discussed). The server connection must be started before the client is run. This technique allows you to save any data from a command line tool to another system. If you want to save a log file, you can also use 'netcat':

# ./nc -w 3 10.1.32.55 9000 < /var/log/messages

Be sure to calculate MD5 hashes for all files that are saved to the evidence server.
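A minimal sketch of that hashing step on the evidence server, assuming GNU 'md5sum' and using a stand-in file in place of real netcat output:

```shell
#!/bin/sh
# Record an MD5 for each collected file so the copies can be re-verified
# later. The file name and contents here are stand-ins for real evidence.
printf 'sample collected output\n' > ps.dat
md5sum ps.dat >> evidence.md5

# Later, confirm nothing has changed since collection:
md5sum -c evidence.md5    # prints "ps.dat: OK" when the hash still matches
```

Appending every hash to one 'evidence.md5' file gives a single manifest that can be re-checked (and itself hashed) at any point during the investigation.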

Why Use The Sleuth Kit?

Before we get into the details of using The Sleuth Kit for incident verification, let me first try and convince you why it is more useful than normal utilities.

Guideline #1 tells us to reduce the trust in the system. Many compromises involve some type of rootkit that is installed on the system. The rootkit modifies either system executables or the file system and process code in the kernel so that files and processes are hidden from the system administrator. The Sleuth Kit does not use the file system code in the kernel, and therefore the results are not affected by the malicious modifications. All The Sleuth Kit needs is access to the raw partition device and it will show the files that would normally be hidden by the rootkits. An example from the SOTM will be shown in the next section.

Guideline #2 tells us to reduce the data we write to the system. The Sleuth Kit is useful here because all output is sent to Standard Out, so we can pipe it through netcat to an evidence server for analysis. We can also use The Sleuth Kit to reduce the modifications to files that we examine. When we look at the contents of a file with 'cat' or 'less', the A-Time is modified. If we run 'find' on the entire directory tree, then the A-Time of every directory is updated and the A-Time of every file may be updated too, depending on what the 'find' command was for. I will show some techniques that achieve the same functionality as 'find' but do not modify the A-Times. The only A-Times that need to be updated with The Sleuth Kit are for the directory where the CD was mounted and the raw partition devices.

Verifying the Incident From the File System Perspective

This section will explain some techniques for verifying an incident using The Sleuth Kit. Keep in mind that there are many techniques that do not use the file system and therefore are not performed with The Sleuth Kit (so we won't cover them here). A future article may focus on those if there is interest.

The first step is to make a full file listing of the system. There are two reasons for this:

  1. It saves the meta data information so that if any data changes during the verification process (such as an A-Time), then we will always have the original values as backup.
  2. We will be using the listing to find evidence of an incident. Many things that can be done with 'find' on the suspect system can be done with the file listing.

The file listing is created with the 'fls' tool in The Sleuth Kit, using the '-r' and '-m' flags. The '-r' flag tells 'fls' to recursively list directories, and the '-m' flag takes the mounting point as an argument and displays all of the meta data details in the format that 'mactime' uses to make a timeline.

For example, suppose that we have two partitions on the suspect Linux web server. We will need to run 'fls' on each of them and send the data to our trusted evidence server:

# ./fls -f linux-ext3 -r -m / /dev/hda8 | ./nc -w 3 10.1.32.55 9000 
# ./fls -f linux-ext3 -r -m /usr/ /dev/hda5 | ./nc -w 3 10.1.32.55 9000

Now we have a snapshot of all MAC times on the system. The only thing that we have modified on the system is the A-Time on the mount point for the CD, the A-Times on any libraries needed by our tools (if they were not statically compiled), and the A-Time on the raw device for each of the partitions.

If the incident involves a compromised system, then a rootkit may have been installed. The 'chkrootkit' tool can be used to check for rootkits, but it relies on the suspect system and can modify the A-Times on files. We can do some basic, non-invasive checks on our file listing before we run 'chkrootkit' [4].

I am going to assume that the output from 'fls' was combined into one big file, 'fls.out'. This file should contain an entry for every file on the system (maybe even some for deleted files). The first rootkit detection technique will be to examine the '/dev/' directory for regular text files instead of character and block devices. Configuration files are commonly hidden there. So, we use 'grep' to extract the '/dev/' directory entries from the 'fls' file.

# grep "|\/dev\/" fls.out > fls-dev.out

The above command has the '|' symbol to force the output to only have '/dev/' as the first directory and not something like '/usr/local/dev/'. The columns in the 'fls.out' file are separated by the '|' symbol. We then make a timeline of the data using 'mactime':

# mactime -b fls-dev.out > timeline-dev

When we view the timeline, we can search for entries that have a regular file type, which will have '-/-' in the mode column. You can either open the timeline in 'vi' or 'less' and search for it, or you can 'grep' it (with certain values escaped):

# grep "\-\/-" timeline-dev

This frequently finds the '/dev/MAKEDEV' file, which is normal, but others should be considered suspect. For the SOTM #29 incident, the following files were found with this method:

  • /dev/ttyop
  • /dev/ttyoa
  • /dev/ttyof
  • /dev/hdx2
  • /dev/hdx1

Another data hiding technique for rootkits is to create files that begin with a '.'. We can find those from the 'fls' output using grep:

# grep "\/\." fls.out | less

This will give us a listing of the files and directories that have a '/.' in the name. On some systems, this could be a very long list of valid names, but any non-standard names should be considered suspect. The SOTM #29 incident had a '/lib/.x/' directory that was found with this process. In the SOTM image, some of the files shown here were supposed to be hidden by the trojaned '/bin/ls' executable.

Another useful technique is to create a timeline of file activity around the time of the 'event' that triggered the verification process. Keep in mind that the times could have been changed by the attacker. To make the full timeline, use:

# mactime -b fls.out > timeline-all

Examine the file for "suspicious" activity and when a suspect file is encountered in the verification process, it can be saved to the evidence server using 'icat' and netcat. The inode value can be found in the second to last column of the timeline and fourth column in the 'fls.out' file. When the inode value has been determined (312 for example), the following is used on the suspect system (with the netcat listener on the server):

# ./icat -f linux-ext3 /dev/hda8 312 | ./nc -w 3 10.1.32.55 9000

This is useful to collect log files, regular files from '/dev/', and system binaries that are commonly modified to hide data (/bin/ls, /bin/ps, /bin/netstat etc.). More detailed analysis of the logs and binaries can be done on the trusted system and this reduces the impact on the suspect system and reduces the trust you place in it. Another technique is to collect the log files using 'icat' as one of the first steps so that the logs do not get overwritten or deleted by the attacker.

Conclusion

This article has shown the basics of using The Sleuth Kit to verify an incident from the file system perspective. The information from this analysis, combined with information about running processes and network ports, could help you determine if the system has been compromised. This has focused on the scenario of a system compromise, but the same general techniques can apply to other situations where you do not want to modify or trust the local system.

One of my development goals has been to add Incident Response functionality into Autopsy that will allow a responder to place a CD into the suspect system and connect to it with their laptop and HTML browser. This can currently be done by modifying some configuration settings by hand, but the process will be automated in the future. That should be completed in the next couple of months and there will be an Informer article on it.

You may have noticed that many of the techniques outlined here can be automated. Scripts can be developed for the evidence server that search for regular files or '.' files. Scripts can be written with 'ifind' to find the inode of a file (such as a log file or system binary) and automatically save it. If you are interested in writing scripts to automate this process, I would appreciate it and can include them in the distribution. Other techniques from 'chkrootkit' can also likely be applied to the file listing.
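As a sketch of what such an evidence-server script could look like, the fragment below automates the two 'grep' checks from earlier in the article. The pipe-delimited lines are fabricated samples that only approximate the real 'fls -r -m' layout; a real script would read the previously saved 'fls.out' instead of generating one:

```shell
#!/bin/sh
# Automate the two rootkit checks on a saved 'fls' listing:
#   1. regular files ('-/-' mode) under /dev/
#   2. hidden names containing '/.'
# The sample lines below are fabricated for illustration only.
cat > fls.out <<'EOF'
0|/dev/ttyop|1234|-/-rw-r--r--|0|0|450|1068940800|1068940800|1068940800
0|/dev/null|1235|c/crw-rw-rw-|0|0|0|1068940800|1068940800|1068940800
0|/lib/.x/install|1300|-/-rwxr-xr-x|0|0|5120|1068940800|1068940800|1068940800
EOF

echo "Regular files in /dev/:"
grep "|/dev/" fls.out | grep -- "-/-"

echo "Names containing '/.':"
grep "/\." fls.out
```

Run against the sample data, the first check flags only '/dev/ttyop' (the character device '/dev/null' passes) and the second flags '/lib/.x/install'.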

Lastly, this article has focused on Unix systems because that is what The Sleuth Kit runs on. Another To Do item has been to compile these tools for Windows and see if we get the same results. If anyone is a Windows guru and wants to help port it, please do. It shouldn't be that hard because The Sleuth Kit only uses standard C methods.

References

Copyright © 2003 by Brian Carrier. All Rights Reserved
