Hey! Thanks for publishing my tool, and thanks everybody for the great feedback here. I've just started addressing some of your points.
Anyway, my need for the tool was mostly because of these few points:
- scripting can be much easier with psc, especially since you can output exactly the fields you want
- eBPF iterators are so flexible: we can get anything defined in the task_struct, even fields that aren't exposed in the proc filesystem. This alone makes the tool extremely powerful, with a reasonable amount of effort to add a new field
- I really like querying my system with a simple language. I tend to forget specific ss, lsof, or ps options; this way it's much easier to get what I need
- no traditional tooling has native container context. It could even be extended to retrieve data from the kubelet, for instance, but I'll think about that
Feel free to reach out if you have any particular need.
I've played with bpf iterators and wrote a post about them [1]. The benefit of iterating over tasks instead of scanning procfs is a pretty astounding performance difference:
> I ran benchmarks on current code in the datadog-agent which reads the relevant data from procfs as described at the beginning of this post. I then implemented benchmarks for capturing the same data with bpf. The performance results were a major improvement.
> On a linux system with around 250 procs it took the procfs implementation 5.45 ms vs 75.6 us for bpf (bpf is ~72x faster). On a linux system with around 10,000 procs it took the procfs implementation ~296 ms vs ~3 ms for bpf (bpf is ~100x faster).

[1] https://www.grant.pizza/blog/bpf-iter/
And with eBPF iterators you can bail out early and move on to the next item when you see an uninteresting one (or one that should be filtered out), instead of emitting text for every item and grepping/filtering it out in post-processing.
I use early bailout a lot (in 0x.tools xcapture) when iterating through all threads in a system to determine which ones are "active" or interesting.
procfs and "everything is a file" are up there with fork on the list of "terrible useless technology that is undeservedly revered".
    # Find processes connected to a specific port
    psc 'socket.dstPort == uint(443)'

    # Filter by PID range
    psc 'process.pid > 1000 && process.pid < 2000'
It seems weird to require the user to remember that ports have to be marked uint when it doesn't look like anything else does.
PIDs haven't been limited to 16 bits for a long time. I guess the default integer literal in these things is a 64-bit signed int (as in CEL), while the port field is typed uint, and the two don't compare implicitly.
But, yeah, this could be solved if the comparison promoted both sides to a wider common type.
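For reference, a sketch of how the two forms differ, assuming psc uses stock CEL semantics (only the uint() form appears in the README; the bare-literal form is hypothetical):

```
psc 'socket.dstPort == 443'         # int literal vs uint field: a type mismatch in stock CEL
psc 'socket.dstPort == uint(443)'   # both sides uint, as the README writes it
```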
I like this tool. I just replaced a multi-step script to find running processes with deleted files open (e.g., updated shared library or binary) that used to be as follows:
- grep /proc/*/maps for " (deleted)" (needs root)
- exclude irrelevancies like paths starting with "/memfd:" (I have lots of other similar exclusions) with grep -v
- extract the pid from the filename part of grep's output with sed
- for each pid, generate readable output from /proc/$pid/cmdline (which is NUL-separated) with tr, xargs, bash printf
- show the pid, cmdline, file path
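The steps above can be sketched roughly as follows (illustrative only; the exclusion list is abbreviated, and reading other users' maps still needs root):

```shell
# Rough sketch of the procfs pipeline described above.
for maps in /proc/[0-9]*/maps; do
    # extract the pid from the path /proc/<pid>/maps
    pid=${maps#/proc/}; pid=${pid%/maps}
    # keep " (deleted)" mappings, drop memfd-style pseudo-files
    hits=$(grep ' (deleted)' "$maps" 2>/dev/null | grep -v '/memfd:')
    [ -n "$hits" ] || continue
    # /proc/<pid>/cmdline is NUL-separated; make it readable
    cmdline=$(tr '\0' ' ' < "/proc/$pid/cmdline" 2>/dev/null)
    printf '%s\t%s\n%s\n' "$pid" "$cmdline" "$hits"
done
```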
Yes, this is what needs-restarting does too.
With this tool, this pipe chain is now just:
    doas psc -o "process.pid,process.cmdline,file.path" \
        'file.path.endsWith(" (deleted)") && !file.path.startsWith("/memfd:") && !...' \
        | sed 1d
This is neat but the examples comparing the tool against piping grep seem to counter the argument to me. A couple of pipes to grep seems much easier to remember and type, especially with all the quotes needed for psc. For scripts where you need exact output this looks great.
I'm the opposite - I much prefer a structured query language (ahem) for this type of thing. If I'm looking at someone's (i.e. my own, 6 months later) script, I much prefer to see the explicit structure being queried vs "why are we feeling for foo or grabbing the 5th field based on squashed spaces as the separator".
Nice use of CEL too. Neat all around.
Thanks for including so many examples! Perhaps include one example output. Other than mention of the optional '--tree' parameter, it's unclear if the default result would be a list, table, JSON, etc.
I'm not convinced of the need to embed CEL. You could just output JSON and pipe to jq.
Sounds less efficient in both space and time.
I guess it's a matter of muscle memory and workflow. It's nice to have options.
An unfortunate name that triggers everybody who’s ever worked at Meta :)
Their first example is bad:
    ps aux | grep nginx | grep root | grep -v grep
can be done instead (from memory, not at a Linux machine ATM):
The commands in their example are not equivalent. The ps | grep thing searches the full command line including arguments, while ps -C (and, presumably, the psc thing) matches just the process name.
Should you for some reason want to do the former, this is easiest done using:
    pgrep -u root -f nginx
which exists on almost all platforms, with the notable exception of AIX.
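The distinction is easy to demonstrate (a sketch; flag behavior per the procps pgrep man page):

```shell
# pgrep matches against the process *name* by default; -f matches the
# full command line, which is what the ps|grep pipeline effectively does.
sleep 300 &
pid=$!
pgrep -x sleep            # finds it by exact process name
pgrep -f 'sleep 300'      # finds it by full command line
kill "$pid"
```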
Their other slightly convoluted example is:
    psc 'socket.state == established && socket.dstPort == uint(443)'
which is much more succinct with:
    lsof -i :443 -s TCP:ESTABLISHED
It has process.cmdline as well as .name
Many new tools appear because people don't know how to use the existing tools or they think the existing tool is too complicated. In time the new tool becomes just as, or more, complicated than the old tool. Because there is a reason the old tool is complicated: the problem requires complexity.
"ss" also has filters, no need for grep:

    ss -o state established '( dport = :ssh or sport = :ssh )'
> psc uses eBPF iterators to read process and file descriptor information directly from kernel data structures. This bypasses the /proc filesystem entirely, providing visibility that cannot be subverted by userland rootkits or LD_PRELOAD tricks.
Is there a trade off here?
I found this justification dubious. To me the main reason to use eBPF is that it gives more information and is lower overhead.
It requires root.
Running eBPF programs doesn't strictly require root.
It requires CAP_BPF, which is considered a highly privileged capability.
So yes, it requires root in the sense of what people mean by root.
You can also enable unprivileged eBPF.
How about comparing it to something sensible like osquery instead of doing silly strawman ps pipelines?