Fingerprinting Process Trees on Linux With Rust
Name fingerprinting is a cybersecurity forensics technique for identifying and tracking processes running on a system by their names and other identifiable information: the process’s file name, file path, command-line arguments, and other indicators of compromise.
Name fingerprinting is often used to identify known malicious or unwanted processes by comparing their names to a database of known names or patterns. For example, a name fingerprinting tool might look for specific file names, such as evil.bin or user32.dll.mui, that are commonly used by malware. It can also flag legitimate processes that behave unexpectedly or maliciously, by looking at the process name together with the command-line arguments used to launch it.
It’s worth noting that name fingerprinting can be an effective technique for identifying known malicious or unwanted processes, but it is limited by the fact that attackers can change the process name or file name to evade detection. It can also generate false positives, which lead to unnecessary alerts or blocks. Therefore, it’s often used in conjunction with other techniques, such as process fingerprinting, to provide a more comprehensive approach to identifying and tracking processes.
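To make the idea concrete, here’s a toy sketch of name-based matching: checking a process name against a hand-maintained set of known-bad names. The helper and the entries are illustrative only, not part of the tool described below.
use std::collections::HashSet;

/// Toy name-fingerprinting check: does the process file name
/// appear in a hand-maintained denylist?
fn is_suspicious(process_name: &str, denylist: &HashSet<&str>) -> bool {
    denylist.contains(process_name)
}

fn main() {
    // example entries only; a real list would be far larger
    let denylist: HashSet<&str> = ["evil.bin", "user32.dll.mui"].into_iter().collect();
    assert!(is_suspicious("evil.bin", &denylist));
    assert!(!is_suspicious("sshd", &denylist));
}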
Idea
Let’s take a typical systemd process tree and try to find something in it. Then ask yourself: how hard is it?
serge@satyricon:~$ pstree
systemd─┬─accounts-daemon───2*[{accounts-daemon}]
        │ .. omitted for brevity
        └─sh───node─┬─node─┬─bash
                    │      ├─bash───sudo───bash───something───2*[{something}]
                    │      └─12*[{node}]
                    └─node─┬─node───6*[{node}]
Now imagine that Prometheus presents it as a /sshd/base/server.sh/sudo/base/something tree in a process gauge. Double nesting of node processes is rolled up, and systemd is omitted because it is the mother of all dragons. The tool also exposes a process_seconds histogram. Isn’t it easy to spot what’s being launched on the server?
serge@satyricon:~$ curl http://localhost:9501/
process{state="RUNNING",tree="/sshd/base/pstree"} 1
process{state="RUNNING",tree="/sshd/base/server.sh/sudo/base/prom-cnproc"} 1
Kicking the tires
I’ve put this idea to the test by writing a Prometheus exporter for process trees started on Linux. One may wonder what processes get started on a Linux machine and whether they are expected; generally, it’s difficult to tell whether a given process is intended to run or not. This utility aims at low-overhead monitoring of every process launch while stripping away the noisy parts of process trees. Events are provided through the Linux kernel Process Events Connector, and the exporter is a small user-space Rust application that tries to mine useful information about process trees in a concise, low-overhead way. An alternative could be built on top of the Extended Berkeley Packet Filter (eBPF) technology, but that’s kernel-space arcane magic: eBPF programs have to be verified not to crash the kernel. Maybe next time; trying that in Rust is probably best done with the bcc crate.
Rust is an interesting language for a research project like Linux process tree fingerprinting: it is fast, memory-safe, and compiles down to a small static binary. The core logic of this concept is the following function, which performs an opinionated tokenization of a process tree to get a human-readable short name:
/// Compacts the name for presentation in monitoring
fn tree(pids: &HashMap<i32, Process>, pid: i32) -> String {
    // start with a given pid
    let mut curr = pid;
    // initialize current process tree
    let mut tree = vec![];
    // tree entropy is the minimum entropy across the paths of binaries executed in this process tree
    let mut tree_entropy = std::f32::MAX;
    // loop until the init process
    while curr != 0 {
        trace!("tree curr={} {}", curr, tree.join("<"));
        if let Some(prc) = pids.get(&curr) {
            curr = prc.ppid;
            // possible optimization: cache label and entropy per pid
            let label = prc.label();
            // call heuristics to detect random filenames
            let path_entropy = prc.entropy();
            if path_entropy < tree_entropy {
                tree_entropy = path_entropy;
            }
            // collapse consecutive repeats of the same label
            if tree.last() == Some(&label) {
                continue;
            }
            // assume that almost every process tree is started via systemd
            if label == "systemd" {
                continue;
            }
            tree.push(label);
        } else {
            curr = 0
        }
    }
    if tree_entropy < 0.022 {
        // a random prefix means that the binary was run from a random location
        tree.push("random");
    }
    tree.reverse();
    return format!("/{}", tree.join("/"));
}
The main loop depends on the cnproc Rust crate. We update the internal state for every process start and exit, and ignore all other events:
pub fn main_loop(&mut self) -> ! {
    loop {
        if let Some(e) = self.monitor.recv() {
            match e {
                // a process called execve(2): record it and its parents
                PidEvent::Exec(pid) => self.start(pid),
                // a process exited: flip gauges and record its duration
                PidEvent::Exit(pid) => self.stop(pid),
                // fork, coredump and other events are not interesting here
                _ => continue,
            }
        }
    }
}
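The loop above reads from a netlink-backed monitor and a pid-indexed cache. The struct holding them is not shown in this post; a rough sketch of its shape could be the following, with the struct and field names being assumptions rather than the exact definitions:
use std::collections::HashMap;

/// Sketch of the exporter’s state, not the project’s exact definition.
pub struct Exporter {
    /// Process Events Connector stream from the cnproc crate
    monitor: cnproc::PidMonitor,
    /// processes seen so far, keyed by pid, as consumed by tree()
    pids: HashMap<i32, Process>,
}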
Every time there’s a new process, we discover its parents and set the process{state="RUNNING"} gauge to one and the process{state="STOPPED"} gauge to zero, just to be sure that we properly initialize the required states for something that resembles an enum in the Prometheus world. I’ve picked the metrics crate, as it exposes macros similar to the log crate.
fn start(&mut self, pid: i32) {
    let mut curr = pid;
    while curr != 0 {
        trace!("pid {} > curr {}", pid, curr);
        if self.pids.contains_key(&curr) {
            // eagerly break the cycle if parents
            // were already discovered
            break;
        }
        let prc = match Process::new(curr) {
            Ok(it) => it,
            Err(e) => {
                warn!("pid {} > {}", curr, e);
                break;
            }
        };
        curr = prc.ppid;
        self.pids.insert(prc.pid, prc);
    }
    let tree = tree(&self.pids, pid);
    gauge!("process", 1.0, "tree" => tree.clone(), "state" => "RUNNING");
    gauge!("process", 0., "tree" => tree.clone(), "state" => "STOPPED");
    debug!("started pid={} tree={}", pid, tree)
}
This macro-based implementation of the Prometheus client was more intuitive than the official client crate. Prometheus is a pull-based system: metrics are pulled by the Prometheus server rather than pushed by the monitored system, which tends to scale better than push-based setups since the server controls the scrape rate. Scalability was not my major concern here, though. I already monitor my tech stack with Prometheus, so exposing a server was natural:
let addr = "127.0.0.1:9501";
let addr: SocketAddr = addr.parse().expect("Unable to parse socket address");
let builder = PrometheusBuilder::new().listen_address(addr);
builder.install().expect("failed to install Prometheus recorder.");
Every time a process stops, we set the process{state="RUNNING"} gauge to zero and bump up the process{state="STOPPED"} gauge. We also record the duration in the process_seconds histogram, so that we collect insights on how long a specific process tree usually runs.
fn stop(&mut self, pid: i32) {
    if !self.pids.contains_key(&pid) {
        // don't trigger for previously unknown processes
        return;
    }
    let tree = tree(&self.pids, pid);
    let prc = self.pids.remove(&pid).unwrap();
    let elapsed = prc.start.elapsed();
    let seconds = elapsed.as_secs_f64();
    gauge!("process", 0., "tree" => tree.clone(), "state" => "RUNNING");
    gauge!("process", 1., "tree" => tree.clone(), "state" => "STOPPED");
    histogram!("process_seconds", seconds, "tree" => tree.clone());
    debug!("stopped pid={} tree={} duration={:?}", pid, tree, elapsed);
}
We use the /proc filesystem to get information like command-line arguments, the parent process ID, or the executable path. Other Linux utilities, like top or ps, rely on this filesystem to get process metadata at the lowest level. I decided not to pick the procfs crate for the sake of keeping the number of direct dependencies lower.
impl Process {
    pub fn new(pid: i32) -> Result<Self> {
        let start = Instant::now();
        let argv = cmdline(pid)?;
        let ppid = ppid(pid)?;
        let exe = Path::new(&format!("/proc/{}/exe", pid)).read_link()?;
        let exe = exe.canonicalize()?;
        trace!("{} pid={} ppid={} took={:.2?}",
            exe.to_str().unwrap_or("..."), pid, ppid, start.elapsed());
        Ok(Process { pid, ppid, argv, exe, start })
    }
    //..
}
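The cmdline and ppid helpers are not shown in this post; they are thin wrappers around /proc files and could look roughly like this (a sketch with simplified error types):
use std::fs;
use std::io;

/// Reads /proc/<pid>/cmdline, which stores arguments separated by NUL bytes.
fn cmdline(pid: i32) -> io::Result<Vec<String>> {
    let raw = fs::read(format!("/proc/{}/cmdline", pid))?;
    Ok(raw
        .split(|b| *b == 0)
        .filter(|chunk| !chunk.is_empty())
        .map(|chunk| String::from_utf8_lossy(chunk).into_owned())
        .collect())
}

/// Reads the parent PID from /proc/<pid>/stat. Everything after the last ')'
/// is whitespace-separated: state, ppid, pgrp, ...
fn ppid(pid: i32) -> io::Result<i32> {
    let stat = fs::read_to_string(format!("/proc/{}/stat", pid))?;
    let after_comm = stat.rsplit(')').next().unwrap_or("");
    let mut fields = after_comm.split_whitespace();
    let _state = fields.next();
    Ok(fields.next().and_then(|f| f.parse().ok()).unwrap_or(0))
}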
One of the heuristics baked into this utility detects whether the executable was run from a random path, which may hint at something downloaded from the network or dropped by another application. It may not be the best heuristic, but it’s far better than a cardinality explosion from random values ending up in process tree fingerprints, which would make it harder to find a needle in the haystack.
/// Returns minimum metric entropy of any path element
pub fn entropy(&self) -> f32 {
    let mut path_entropy = std::f32::MAX;
    let actual = self.actual_runnable();
    for chunk in actual.split('/') {
        let entropy = metric_entropy(chunk.as_bytes());
        trace!("entropy {}={}", chunk, entropy);
        if entropy < path_entropy {
            path_entropy = entropy;
        }
    }
    path_entropy
}
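The metric_entropy helper is also not shown. In the usual sense, metric entropy is the Shannon entropy of a string divided by its length; the sketch below follows that definition, although the actual helper and the 0.022 threshold used in tree() may be normalized or tuned differently:
/// Shannon entropy of the byte distribution divided by the length.
/// A sketch only; the project's normalization may differ.
fn metric_entropy(data: &[u8]) -> f32 {
    if data.is_empty() {
        // empty path elements (the leading "/" produces one) should
        // never become the minimum, so return a large value
        return f32::MAX;
    }
    let mut counts = [0u32; 256];
    for &b in data {
        counts[b as usize] += 1;
    }
    let len = data.len() as f32;
    let shannon: f32 = counts
        .iter()
        .filter(|&&c| c > 0)
        .map(|&c| {
            let p = c as f32 / len;
            -p * p.log2()
        })
        .sum();
    shannon / len
}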
Whenever we launch a Python or Bash script, we’re interested in the name of the script, not the fact that /bin/sh is called. This means that a cron job python /tmp/ZW50cm9weQo/top.py should appear as /random/crond/top.py, where /random stands for the high-entropy name of the folder where the script is located. It’s Linux, and a lot of executables are not binaries but shell or Python scripts. For those, we’re not interested in which specific Python processes were running, but rather which specific Python scripts were executed. That way we can derive useful process tree fingerprints, as names like shell.py or listener.py may hint at the executable’s intent.
/// Determines actual runnable file - binary or script
fn actual_runnable(&self) -> &str {
    let sh = self.is_shell();
    let py = self.is_python();
    let has_args = self.argv.len() > 1;
    if (sh || py) && has_args {
        let maybe_script = self.argv[1].as_str();
        // or should it be just regex?..
        let path = Path::new(maybe_script);
        if path.is_file() {
            return maybe_script;
        }
    }
    self.exe.to_str().unwrap_or("/")
}
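The is_shell and is_python checks can be as simple as matching the executable’s file name; a sketch follows, with the interpreter lists being assumptions rather than the actual ones:
impl Process {
    /// File name of the executable, e.g. "bash" for /usr/bin/bash
    fn exe_name(&self) -> &str {
        self.exe.file_name().and_then(|n| n.to_str()).unwrap_or("")
    }

    /// True for common shells; the exact list is an assumption
    fn is_shell(&self) -> bool {
        matches!(self.exe_name(), "sh" | "bash" | "dash" | "zsh")
    }

    /// True for CPython interpreters like "python3" or "python3.11"
    fn is_python(&self) -> bool {
        self.exe_name().starts_with("python")
    }
}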
The last important dimensionality reduction heuristic relies on hand-crafting the list of binaries in the base distribution image, through commands like find /usr/sbin -type f | xargs realpath. Whenever a basic Linux binary is called, it’ll be aliased as base in the tree name. This simply means that instead of apt-get, bash, or ls, we’ll just see base labels, making process tree fingerprints less noisy.
/// Determines short label to include in process tree
pub fn label(&self) -> &str {
    let path = self.actual_runnable();
    if is_base(path) {
        // base system may have plenty of scripts
        return "base";
    }
    // maybe this will be improved
    let filename = path.split("/").last().unwrap_or("/");
    filename
}
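The is_base check can be backed by the list generated with that find command and embedded at compile time; a sketch, assuming a base_paths.txt file with one canonical path per line:
use std::collections::HashSet;
use std::sync::OnceLock;

/// One canonical path per line, e.g. produced by
/// `find /usr/sbin -type f | xargs realpath`.
/// The file name is an assumption for this sketch.
static BASE_PATHS: &str = include_str!("base_paths.txt");

/// Checks whether a path belongs to the base distribution image.
fn is_base(path: &str) -> bool {
    static SET: OnceLock<HashSet<&'static str>> = OnceLock::new();
    SET.get_or_init(|| BASE_PATHS.lines().collect()).contains(path)
}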
How can you use it?
Please star and fork the nfx/prom-cnproc repository (MIT license). This work is only an initial prototype and you should use it at your own risk. I’ve been running it for more than a year on my security research sandboxes. The resulting 500 KB binary has no dependencies and runs with almost no overhead. The process has to run as root, because there seems to be no other way to listen on the corresponding netlink socket. If there’s a way to improve that, I’d be happy to get a pull request.
Download a release package for your architecture and install it with dpkg -i prom-cnproc_0.1.0_amd64.deb. If you’d like to see some debug information from the binary, RUST_LOG=trace will give you most of it. Currently, the HTTP server listens on localhost:9501 and there’s no way to configure that yet. Once this exporter process is running, point your Prometheus at it.
Whenever you’re missing some features (or don’t trust the released binaries), please build from source. I’ve used the following release commands:
apt-get install libc6-dev-i386
cargo install cargo-deb
cargo deb --target=aarch64-unknown-linux-gnu
cargo deb --target=x86_64-unknown-linux-gnu
Wish you a happy kicking!