
feat: include ancestors in process events #2938

Open · wants to merge 3 commits into base: main

Conversation


@t0x01 t0x01 commented Sep 19, 2024

Fixes #2420

Description

Reason: an option to include all ancestors of a process in process events can be very useful for observability and filtering purposes, e.g. to apply complex correlation rules later in the data processing pipeline, or to filter out extraneous events.

Changes made:

  • Read and set the option enable-process-ancestors from the config file. The option is off by default.
  • If enable-process-ancestors is set, try to include ancestors (up to PID 1/PID 2) of the process beyond the immediate parent in process_exec, process_exit, process_uprobe, process_kprobe, process_lsm and process_tracepoint events, in the respective protobuf message for the given process.
  • If enable-process-ancestors is set and we were unable to collect the process's ancestors for an event, add that event to the eventcache for reprocessing.
  • When reprocessing events from the eventcache with enable-process-ancestors set, try to collect the process's ancestors again.
  • Implement a new export filter that can filter on ancestor binary names using RE2 regular expressions.
  • Add a test for the new export filter.
  • Add information about the new features to the documentation.
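One of the bullets above introduces an export filter over ancestor binary names using RE2. As a minimal sketch of how such matching works in Go (whose regexp package implements RE2) — the function and parameter names here are illustrative assumptions, not the PR's actual filter API:

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesAncestorBinary reports whether any ancestor binary name matches
// one of the given RE2 patterns. Names and signature are illustrative;
// the PR's real filter lives in the export filter code.
func matchesAncestorBinary(ancestorBinaries, patterns []string) (bool, error) {
	for _, p := range patterns {
		re, err := regexp.Compile(p) // Go's regexp package is RE2-based
		if err != nil {
			return false, err
		}
		for _, bin := range ancestorBinaries {
			if re.MatchString(bin) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ancestors := []string{"/usr/bin/bash", "/usr/sbin/sshd"}
	match, _ := matchesAncestorBinary(ancestors, []string{"sshd$"})
	fmt.Println(match)
}
```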

@t0x01 t0x01 requested review from a team and mtardy as code owners September 19, 2024 14:51

netlify bot commented Sep 19, 2024

Deploy Preview for tetragon ready!

  • 🔨 Latest commit: 815a7d7
  • 🔍 Latest deploy log: https://app.netlify.com/sites/tetragon/deploys/676430cf68480b000862fb3e
  • 😎 Deploy Preview: https://deploy-preview-2938--tetragon.netlify.app

@mtardy mtardy requested a review from jrfastab September 23, 2024 10:31
@mtardy mtardy added the release-note/major This PR introduces major new functionality label Sep 23, 2024
@t0x01 (Author) commented Sep 26, 2024

Hello.

Update: I found even more problems, converting back to draft for now.

@jrfastab (Contributor)

I'll look shortly; sorry, I was travelling and am now catching up. I should be able to get to this today or tomorrow, thanks!

@t0x01 t0x01 marked this pull request as draft October 7, 2024 13:15
@t0x01 t0x01 force-pushed the pr/t0x01/process-ancestors branch from e2e9ea1 to 558ee86 Compare October 10, 2024 09:40
@t0x01 t0x01 marked this pull request as ready for review October 10, 2024 10:13
@t0x01 (Author) commented Oct 10, 2024

I think I misunderstood the purpose of both MsgProcessCleanupEventUnix and refcnt initially, so I no longer call RefInc/RefDec for ancestors. Sorry about that.

So now there seems to be no real reason to return []*ProcessInternal from GetAncestorProcessesInternal instead of []*tetragon.Process. And if I change that, it will solve the double-loop problem as well. But I'm not sure whether I should actually do that, because returning []*ProcessInternal may be beneficial in the future for reasons I don't yet see.

The biggest obstacle now is that, due to the current implementation of process cleanup, as described in commit 45745a0, it becomes impossible to reconstruct the full process ancestry in some cases. Assume the following scenario:

  clone()             id=1
    exec()            id=2 [will also cleanup id=1]
    exec()            id=3 [will also cleanup id=2]
    ...
    exec()            id=n-1 [will also cleanup id=n-2]
    exec()            id=n [will also cleanup id=n-1]
  exit()              id=n

If n > 3, then all processes with id < n-1 will have their refcnt set to 0 and, as a result, be removed from the process cache, breaking the ancestry chain. I don't think I can resolve that without introducing potentially breaking changes, and this PR already has quite a lot of code to review.

And there is still an inconsistency in the ancestors field's value across the protobuf messages in api/v1/tetragon/tetragon.proto and vendor/github.com/cilium/tetragon/api/v1/tetragon/tetragon.proto. I'm still not sure how to handle that properly.

@jrfastab, may I ask you for a code review now? Sorry for the delay.

@t0x01 t0x01 force-pushed the pr/t0x01/process-ancestors branch from 558ee86 to 9332a4d Compare November 15, 2024 14:22
for process.process.Pid.Value > 2 {
if process, err = procCache.get(process.process.ParentExecId); err != nil {
logger.GetLogger().WithError(err).WithField("id in event", execId).Debugf("ancestor process not found in cache")
return nil
Contributor

should we return the ancestors we were able to get here and just increase the error metric?

@t0x01 (Author) Nov 22, 2024

I guess it depends on whether we want to retry later if we were not able to do it this time. I chose to return nil here so I could then add a condition to the ec.Add() call in GetProcessExec(), GetProcessExit(), etc. that adds the corresponding events to the event cache if we were not able to get all ancestors.

Here:

if useCache {
	if ec := eventcache.Get(); ec != nil &&
		(ec.Needed(tetragonProcess) ||
			(tetragonProcess.Pid.Value > 1 && ec.Needed(tetragonParent)) ||
			(option.Config.EnableProcessAncestors && tetragonParent.Pid.Value > 2 && tetragonAncestors == nil)) {
		ec.Add(proc, tetragonEvent, event.Unix.Msg.Common.Ktime, event.Unix.Process.Ktime, event)
		return nil
	}
}

Later, in Retry()/RetryInternal(), I call eventcache.CacheRetries(eventcache.AncestorsInfo).Inc() if GetAncestorProcessesInternal() returns nil again.

I guess we could return both the ancestors slice and an error from GetAncestorProcessesInternal(), but then what should we do in Retry()/RetryInternal()? Or maybe we shouldn't even bother to retry if we were unable to get all ancestors?

Contributor

ok, it's fine

@@ -427,6 +470,20 @@ func (msg *MsgExitEventUnix) RetryInternal(ev notify.Event, timestamp uint64) (*
internal, parent := process.GetParentProcessInternal(msg.ProcessKey.Pid, timestamp)
var err error

if option.Config.EnableProcessAncestors && ev.GetAncestors() == nil &&
Contributor

could we put this in a function and call it from all the other places?

@t0x01 (Author) Nov 22, 2024

I think we could, but I might have to reimplement it due to the process cleanup problem I described in the latest PR comment. Furthermore, I'm still not sure what to do about that problem, as all my attempts to deal with it so far have been unreliable. I would really appreciate advice.

Contributor

ok, yea.. seems like a problem.. not sure yet how to deal with that

@t0x01 (Author) Nov 27, 2024

I guess I can make getCleanupEvent return MsgProcessCleanupEventUnix only if the MsgExecveEventUnix event's process has the clone flag.

func (msg *MsgExecveEventUnix) getCleanupEvent() *MsgProcessCleanupEventUnix {
	flags := strings.Join(readerexec.DecodeCommonFlags(msg.Unix.Process.Flags), " ")
	if msg.Unix.Msg.CleanupProcess.Ktime == 0 || !strings.Contains(flags, "clone") {
		return nil
	}
	return &MsgProcessCleanupEventUnix{
		PID:   msg.Unix.Msg.CleanupProcess.Pid,
		Ktime: msg.Unix.Msg.CleanupProcess.Ktime,
	}
}

Then, in the MsgExecveEventUnix event, I can make it so parent.RefInc("parent") is called only if tetragonProcess has either the clone or the procFS flag.

if parent != nil && (strings.Contains(tetragonProcess.Flags, "clone") ||
		strings.Contains(tetragonProcess.Flags, "procFS")) {
	parent.RefInc("parent")
}

Then, in the MsgExitEventUnix event, all exec() processes would have their refcnt set to 1, clone() processes to 0, and the actual parent to 2. So we just have to call ancestor.RefDec("parent") for all ancestors returned by GetAncestorProcessesInternal() with the condition process.process.Pid.Value == tetragonProcess.Pid.Value, where process is the process inside GetAncestorProcessesInternal() and tetragonProcess is the process from the MsgExitEventUnix event. So basically, GetAncestorProcessesInternal() would look like this:

// GetAncestorProcessesInternal returns a slice, representing a continuous sequence of ancestors of
// the process, including the immediate parent, that satisfy a given condition. The last element of
// the slice corresponds to the ancestor that no longer meets the given condition.
// The initial process, identified by the execId parameter, must meet the condition as well.
func GetAncestorProcessesInternal(execId string, condition func(*ProcessInternal) bool) []*ProcessInternal {
	var process *ProcessInternal
	var err error

	if process, err = procCache.get(execId); err != nil {
		return nil
	}

	var ancestors []*ProcessInternal
	for condition(process) {
		if process, err = procCache.get(process.process.ParentExecId); err != nil {
			logger.GetLogger().WithError(err).WithField("id in event", execId).Debug("ancestor process not found in cache")
			return nil
		}
		ancestors = append(ancestors, process)
	}

	return ancestors
}

This approach has an obvious flaw, of course. If the exec() ancestor with id=M has already been removed from the process cache for some reason, then we won't be able to call RefDec() for any of the previous M ancestors.

Contributor

@tpapagian any idea on this?

Member

I will try to write down an example to understand the situation exactly and post it here. This may help us understand what exactly we are missing.

Author

Right, sorry. Currently, if I'm not mistaken, it should be like this:

  ...
  exec()    (id=1) (refcnt=1)
  ...
  exec()    (id=1) (refcnt=2)
  clone()   (id=2) (refcnt=1) - clone: process=1, parent++
  ...
  exec()    (id=1) (refcnt=2)
  clone()   (id=2) (refcnt=0) - cleanup: process--, parent--
    exec()  (id=3) (refcnt=1) - exec: process=1, parent++
  ...
  exec()    (id=1) (refcnt=1)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=1) - cleanup: process--, parent--
    exec()  (id=4) (refcnt=1) - exec: process=1, parent++
  ...
  exec()    (id=1) (refcnt=1)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=0)
    exec()  (id=4) (refcnt=1) - cleanup: process--, parent--
    exec()  (id=5) (refcnt=1) - exec: process=1, parent++
  ...
  exec()    (id=1) (refcnt=1)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=0)
    exec()  (id=4) (refcnt=0)
    exec()  (id=5) (refcnt=1) - cleanup: process--, parent--
    exec()  (id=6) (refcnt=1) - exec: process=1, parent++
  ...
  exec()    (id=1) (refcnt=1)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=0)
    exec()  (id=4) (refcnt=0)
    exec()  (id=5) (refcnt=0)
    exec()  (id=6) (refcnt=1) - cleanup: process--, parent--
    exec()  (id=7) (refcnt=1) - exec: process=1, parent++
  ...
  exec()    (id=1) (refcnt=1)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=0)
    exec()  (id=4) (refcnt=0)
    exec()  (id=5) (refcnt=0)
    exec()  (id=6) (refcnt=0)
    exec()  (id=7) (refcnt=0)
  exit()    (id=7) - exit: process--, parent--

Now, if I call RefInc for all ancestors inside each exec, it will look like this:

  ...
  exec()    (id=1) (refcnt=1)
  ...
  exec()    (id=1) (refcnt=2)
  clone()   (id=2) (refcnt=1) - clone: process=1, parent++
  ...
  exec()    (id=1) (refcnt=2)
  clone()   (id=2) (refcnt=0) - cleanup: process--, parent--
    exec()  (id=3) (refcnt=1) - exec: process=1, parent++, ancestors++
  ...
  exec()    (id=1) (refcnt=2)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=1) - cleanup: process--, parent--
    exec()  (id=4) (refcnt=1) - exec: process=1, parent++, ancestors++
  ...
  exec()    (id=1) (refcnt=3)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=1)
    exec()  (id=4) (refcnt=1) - cleanup: process--, parent--
    exec()  (id=5) (refcnt=1) - exec: process=1, parent++, ancestors++
  ...
  exec()    (id=1) (refcnt=4)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=2)
    exec()  (id=4) (refcnt=1)
    exec()  (id=5) (refcnt=1) - cleanup: process--, parent--
    exec()  (id=6) (refcnt=1) - exec: process=1, parent++, ancestors++
  ...
  exec()    (id=1) (refcnt=5)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=3)
    exec()  (id=4) (refcnt=2)
    exec()  (id=5) (refcnt=1)
    exec()  (id=6) (refcnt=1) - cleanup: process--, parent--
    exec()  (id=7) (refcnt=1) - exec: process=1, parent++, ancestors++
  ...
  exec()    (id=1) (refcnt=4)
  clone()   (id=2) (refcnt=0)
    exec()  (id=3) (refcnt=2)
    exec()  (id=4) (refcnt=1)
    exec()  (id=5) (refcnt=0)
    exec()  (id=6) (refcnt=0)
    exec()  (id=7) (refcnt=0)
  exit()    (id=7) - exit: process--, parent--, ancestors--

Member

Let me provide my thoughts. Let's assume the following in order to make things work:

  1. getAncestors from exec calls RefInc for each ancestor
  2. getAncestors from exit calls RefDec for each ancestor
  3. getAncestors from clone calls RefDec for each ancestor
  4. getCleanupEvent calls RefDec for each ancestor (except from the process and parent that it currently does)

Initially we have 1 process:

init    /usr/init pid=1 exec_id=1 refCnt=1 

After calling clone() on process with pid=1:

        /usr/init pid=1 exec_id=1 refCnt=2 [+1 from clone]
clone   /usr/init pid=2 exec_id=2 refCnt=1 [+1 from clone]

After calling clone() on process with pid=2:

        /usr/init pid=1 exec_id=1 refCnt=3 [+1 from getAncestors]
        /usr/init pid=2 exec_id=2 refCnt=2 [+1 from clone]
clone   /usr/init pid=3 exec_id=3 refCnt=1 [+1 from clone]

After calling exec("/usr/a") on process with pid=3:

        /usr/init pid=1 exec_id=1 refCnt=3 [+1 from getAncestors | -1 from getCleanupEvent]
        /usr/init pid=2 exec_id=2 refCnt=2 [+1 from getAncestors | -1 from getCleanupEvent]
        /usr/init pid=3 exec_id=3 refCnt=1 [+1 from exec         | -1 from getCleanupEvent]
exec    /usr/a    pid=3 exec_id=4 refCnt=1 [+1 from exec]

After calling clone() on process with pid=3:

        /usr/init pid=1 exec_id=1 refCnt=4 [+1 from getAncestors]
        /usr/init pid=2 exec_id=2 refCnt=3 [+1 from getAncestors]
        /usr/init pid=3 exec_id=3 refCnt=2 [+1 from getAncestors]
        /usr/a    pid=3 exec_id=4 refCnt=2 [+1 from clone]
clone   /usr/a    pid=4 exec_id=5 refCnt=1 [+1 from clone]

After calling exec("/usr/b") on process with pid=4:

        /usr/init pid=1 exec_id=1 refCnt=4 [+1 from getAncestors | -1 from getCleanupEvent]
        /usr/init pid=2 exec_id=2 refCnt=3 [+1 from getAncestors | -1 from getCleanupEvent]
        /usr/init pid=3 exec_id=3 refCnt=2 [+1 from getAncestors | -1 from getCleanupEvent]
        /usr/a    pid=3 exec_id=4 refCnt=2 [+1 from getAncestors | -1 from getCleanupEvent]
        /usr/a    pid=4 exec_id=5 refCnt=1 [+1 from exec         | -1 from getCleanupEvent]
exec    /usr/b    pid=4 exec_id=6 refcnt=1 [+1 from exec] 

After calling exit() on process with pid=4:

        /usr/init pid=1 exec_id=1 refCnt=3 [-1 from getAncestors]
        /usr/init pid=2 exec_id=2 refCnt=2 [-1 from getAncestors]
        /usr/init pid=3 exec_id=3 refCnt=1 [-1 from getAncestors]
        /usr/a    pid=3 exec_id=4 refCnt=1 [-1 from getAncestors]
        /usr/a    pid=4 exec_id=5 refCnt=0 [-1 from exit] [DELETED]
exit    /usr/b    pid=4 exec_id=6 refcnt=0 [-1 from exit] [DELETED]

After calling exit() on process with pid=3:

        /usr/init pid=1 exec_id=1 refCnt=2 [-1 from getAncestors]
        /usr/init pid=2 exec_id=2 refCnt=1 [-1 from getAncestors]
        /usr/init pid=3 exec_id=3 refCnt=0 [-1 from exit] [DELETED]
exit    /usr/a    pid=3 exec_id=4 refCnt=0 [-1 from exit] [DELETED]

After calling exit() on process with pid=2:

        /usr/init pid=1 exec_id=1 refCnt=1 [-1 from exit]
exit    /usr/init pid=2 exec_id=2 refCnt=0 [-1 from exit] [DELETED]

After calling exit() on process with pid=1:

exit    /usr/init pid=1 exec_id=1 refCnt=0 [-1 from exit] [DELETED]

As I initially said, in order to make that work, we need to make some modifications to getAncestors and getCleanupEvent. I also haven't thought about how this will work when we need to add something to the eventcache. But we will figure out those details if needed.

Does this make sense? Do I miss anything?

@t0x01 (Author) Dec 5, 2024

It does make sense, yes. This was the first thing I tried, and, unironically, I think it worked the best. I chose to try something else because getAncestors and getCleanupEvent here do a lot of +1/-1 calls that basically just cancel each other out, so we're doing most of the work for no actual value.

Plus, I couldn't call RefInc/RefDec for ancestors right away inside GetAncestorProcessesInternal, because at that moment I couldn't guarantee that I would be able to collect all of them there. So I looped over all ancestors at least twice: first to collect them all, second to call RefInc/RefDec for each.

The more processes I had with lots of ancestors, the worse it looked. So I eventually decided to abandon that approach and try to call RefInc/RefDec only when needed. Honestly, I'm not sure if that was the right call.

I'll implement your suggestions and come back, thank you.

Member

because getAncestors and getCleanupEvent here do a lot of +1/-1 calls that basically just cancel each other

Yes, I agree on that. But this is just a matter of increasing/decreasing a counter (i.e. minimal overhead) in order to get a simpler design. Making reference counting correct is quite challenging, so I would prioritize a simpler design here.

Honestly, i'm not sure, if that was the right call.

Not sure either, to be honest. I just provided my thoughts on how this could work, but I haven't spent time understanding all the details. So what I provided in my previous message could have issues as well when dealing with the eventcache etc.

I'll implement your suggestions and come back, thank you.

I would propose spending some time first to understand all the details, and please ask any questions here in case something does not make sense. Once everything is clear, you can proceed with the implementation.

process = GetProcess(ev)
var processes []*tetragon.Process
switch level {
case 0: // Process
Contributor

please add a const/iota block for the Process/Parent/Ancestors values
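The const/iota block the reviewer asks for might look like this; the identifier names and value order are assumptions for illustration, not the PR's actual code:

```go
package main

import "fmt"

// Illustrative filter levels expressed as a const/iota block instead of
// bare numeric case labels in the switch.
const (
	Process   = iota // 0: match against the process itself
	Parent           // 1: match against the immediate parent
	Ancestors        // 2: match against any ancestor
)

func main() {
	fmt.Println(Process, Parent, Ancestors)
}
```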

proc := ev.GetProcess()
parent := ev.GetParent()
tetragonProcess := ev.GetProcess()
tetragonParent := ev.GetParent()
Contributor

this rename makes the whole change more complex, please keep proc and parent

Author

I've renamed them for consistency, mostly because GetProcessExec uses these names for the same objects, i.e. proc for *process.ProcessInternal and tetragonProcess for *tetragon.Process.

In some functions a process of type *process.ProcessInternal is named proc, in others internal. And in MsgCloneEventUnix.Retry, proc has type *tetragon.Process. It's kind of inconvenient.

Also, I would say that tracing.go is much easier to understand, as it uses the same naming in all functions. I'll revert these changes, but for the future I think it would be better to make variable names more consistent.

Contributor

ok, I'd prefer this rename in a separate change that wouldn't change the behavior

return err
}
parent.RefInc("parent")
ev.SetParent(parent.UnsafeGetProcess())
Contributor

same here; it looks like we can keep the same path as the original code and just add eventcache.CacheRetries to the error path?

(ec.Needed(tetragonEvent.Process) || (tetragonProcess.Pid.Value > 1 && ec.Needed(tetragonEvent.Parent))) {
(ec.Needed(tetragonProcess) ||
(tetragonProcess.Pid.Value > 1 && ec.Needed(tetragonParent)) ||
(option.Config.EnableProcessAncestors && tetragonParent.Pid.Value > 2 && tetragonAncestors == nil)) {
Contributor

this condition is used everywhere.. could we add ec.NeededAncestors ?
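The helper the reviewer suggests could be sketched roughly as below. The stand-in types and the method's home (the event cache) are assumptions; the real code would take *tetragon.Process and read option.Config.EnableProcessAncestors:

```go
package main

import "fmt"

// proc and cache are minimal stand-ins for *tetragon.Process and the
// event cache; everything here is illustrative.
type proc struct{ Pid uint32 }

type cache struct{ enableProcessAncestors bool }

// NeededAncestors bundles the repeated call-site condition: ancestors are
// needed only when the option is enabled, the parent sits below PID 2 in
// the process tree, and no ancestors have been collected yet.
func (c *cache) NeededAncestors(parent *proc, ancestors []*proc) bool {
	return c.enableProcessAncestors && parent.Pid > 2 && ancestors == nil
}

func main() {
	c := &cache{enableProcessAncestors: true}
	fmt.Println(c.NeededAncestors(&proc{Pid: 1234}, nil)) // ancestors still needed
	fmt.Println(c.NeededAncestors(&proc{Pid: 2}, nil))    // parent is PID 2: nothing above to collect
}
```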


@t0x01 t0x01 force-pushed the pr/t0x01/process-ancestors branch from 9332a4d to bd44a87 Compare December 12, 2024 12:22
@t0x01 (Author) commented Dec 12, 2024

Hello @olsajiri @tpapagian .

I made some changes based on your suggestions:

  • GetAncestorProcesses and GetAncestorProcessesInternal now return an ancestors slice and an error. If we encounter an error, the returned slice contains all ancestors we were able to collect up to that moment.
  • Conditions for ancestor collection and for adding events to the eventcache moved to NeededAncestors and NeededAncestorsInternal.
  • MsgExecveEventUnix and MsgCloneEventUnix events now increase the reference counter of all ancestors if the enable-process-ancestors option is set; MsgExitEventUnix and MsgProcessCleanupEventUnix decrease it.

I also renamed some variables for the sake of consistency.

GetAncestorProcesses/GetAncestorProcessesInternal and NeededAncestors/NeededAncestorsInternal exist because in some functions we only need *tetragon.Process ancestors, not *process.ProcessInternal ancestors. This might not be the best approach, so please tell me if I need to change it.

@t0x01 t0x01 requested review from olsajiri and tpapagian December 12, 2024 12:35
@tpapagian (Member)

I made some changes based on your suggestions:

Thanks for the update, I will review that possibly on Monday.

Allow to include ancestors of the process beyond the immediate parent (up to PID 1 / PID 2) in process_exec, process_exit, process_uprobe, process_kprobe, process_lsm, process_tracepoint events via `--enable-process-ancestors` option. Turn `--enable-process-ancestors` option off by default.

Signed-off-by: t0x01 <[email protected]>
Implement a new export filter that can filter over ancestor binary names using RE2 regular expressions.

Signed-off-by: t0x01 <[email protected]>
Add information about ancestors, ancestor filter and ancestors related metrics to documentation.

Signed-off-by: t0x01 <[email protected]>
@t0x01 t0x01 force-pushed the pr/t0x01/process-ancestors branch from bd44a87 to 815a7d7 Compare December 19, 2024 14:42
@tpapagian (Member) left a comment

Thanks for taking the time to do the updates. I believe the code is simpler now.

One thing I would like to see is some tests that check the reference counts for the ancestors. For example, in https://github.com/cilium/tetragon/blob/main/pkg/grpc/exec/exec_test_helper.go we define some tests that are called in https://github.com/cilium/tetragon/blob/main/pkg/grpc/exec/exec_test.go. In those, we check the reference counts.

As breaking reference counting can cause several issues (process information being evicted from the cache earlier than needed, so that events lack important information; or the opposite, where process information never gets evicted from the cache, resulting in excessive memory usage), it would be great to add some tests for catching such issues. As an example, you can simulate the case we described in #2938 (comment) and after each event check that all reference counts are what we expect to see.

GetAncestorProcesses/GetAncestorProcessesInternal and NeededAncestors/NeededAncestorsInternal

I believe that in those cases there is some code repetition that could be avoided. For example, for GetAncestorProcessesInternal and GetAncestorProcesses you can keep only GetAncestorProcessesInternal, since from a ProcessInternal struct we can get the process. Then, instead of GetAncestorProcesses, you can call GetAncestorProcessesInternal and, when iterating over the result, keep only what you need. Based on that we can possibly think of a similar solution for NeededAncestors/NeededAncestorsInternal, but I haven't thought through all the details.
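The wrapper pattern the reviewer describes could be sketched as below. The stub types and the fixed return data stand in for process.ProcessInternal, tetragon.Process and the real cache walk; only the unwrap-instead-of-duplicate structure is the point:

```go
package main

import "fmt"

// Process and ProcessInternal are stubs for the real tetragon types.
type Process struct{ Binary string }

type ProcessInternal struct{ process *Process }

func (pi *ProcessInternal) UnsafeGetProcess() *Process { return pi.process }

// getAncestorProcessesInternal stands in for the real ancestor walk over
// the process cache; it returns fixed data here.
func getAncestorProcessesInternal(execId string) ([]*ProcessInternal, error) {
	return []*ProcessInternal{
		{process: &Process{Binary: "/usr/bin/bash"}},
		{process: &Process{Binary: "/sbin/init"}},
	}, nil
}

// getAncestorProcesses, as suggested, just unwraps the internal slice
// instead of duplicating the cache walk.
func getAncestorProcesses(execId string) ([]*Process, error) {
	internals, err := getAncestorProcessesInternal(execId)
	if err != nil {
		return nil, err
	}
	ancestors := make([]*Process, 0, len(internals))
	for _, pi := range internals {
		ancestors = append(ancestors, pi.UnsafeGetProcess())
	}
	return ancestors, nil
}

func main() {
	ancestors, _ := getAncestorProcesses("some-exec-id")
	for _, p := range ancestors {
		fmt.Println(p.Binary)
	}
}
```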

@@ -415,17 +510,32 @@ func GetProcessExit(event *MsgExitEventUnix) *tetragon.ProcessExit {

type MsgExitEventUnix struct {
tetragonAPI.MsgExitEvent
RefCntDone [2]bool
RefCntDone [3]bool
Member

Not sure that having one more bool for all ancestors is the correct approach. We use index 0 for the process and index 1 for the parent, and the reason is that we don't want to update the refcnt multiple times when calling RetryInternal.

In the exit event, for example, you could possibly refDec some of the ancestors and not all of them. So in the second RetryInternal call we would not know which of those we still need to refDec. I may be missing something here, but it's worth thinking about. A more reasonable approach for me would be to have one RefCntDone entry per ancestor.

This also applies in the other cases where you add one more RefCntDone entry.
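A per-ancestor done-flag, as suggested, could be sketched like this; the types, field names and map-keyed-by-ExecId choice are assumptions, not the PR's code:

```go
package main

import "fmt"

// ancestor is a stand-in for a cached process with a reference count.
type ancestor struct {
	execId string
	refCnt int
}

// exitMsg tracks one done flag per ancestor so that a second
// RetryInternal call only decrements the refcnts it has not handled yet.
type exitMsg struct {
	ancestorDone map[string]bool
}

// refDecAncestors decrements each ancestor at most once across retries.
func (m *exitMsg) refDecAncestors(ancestors []*ancestor) {
	for _, a := range ancestors {
		if m.ancestorDone[a.execId] {
			continue // already decremented in an earlier retry
		}
		a.refCnt--
		m.ancestorDone[a.execId] = true
	}
}

func main() {
	a := &ancestor{execId: "a", refCnt: 2}
	b := &ancestor{execId: "b", refCnt: 1}
	msg := &exitMsg{ancestorDone: map[string]bool{}}
	msg.refDecAncestors([]*ancestor{a})    // first retry only found one ancestor
	msg.refDecAncestors([]*ancestor{a, b}) // second retry found both; a is skipped
	fmt.Println(a.refCnt, b.refCnt)
}
```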

Author

Currently, I call RefDec for all ancestors only if I was able to collect them all.

if proc != nil && proc.NeededAncestors() && !msg.RefCntDone[AncestorsRefCnt] {
	if ancestors, perr := process.GetAncestorProcessesInternal(proc.UnsafeGetProcess().ParentExecId); perr == nil {
		var tetragonAncestors []*tetragon.Process
		for _, ancestor := range ancestors {
			tetragonAncestors = append(tetragonAncestors, ancestor.UnsafeGetProcess())
			ancestor.RefDec("ancestor")
		}
		ev.SetAncestors(tetragonAncestors)
		msg.RefCntDone[AncestorsRefCnt] = true
	} else {
		eventcache.CacheRetries(eventcache.AncestorsInfo).Inc()
		err = eventcache.ErrFailedToGetAncestorsInfo
	}
}

If GetAncestorProcessesInternal returns an error there, I do not decrement the ancestors' refcnt at all.

Otherwise, if GetAncestorProcessesInternal was successful, msg.RefCntDone[AncestorsRefCnt] is set to true, and in the next iteration the condition above evaluates to false, so RefDec is not called for the ancestors again.

I thought of decrementing the refcnt of ancestors even if I was not able to collect them all, but in that case, in the next RetryInternal iteration, I could find myself unable to get any ancestors at all, since I would already have decremented some of their refcnts and some of them could have ended up at 0. So an alternative would be to store the ExecId of the last ancestor we were able to find in the current iteration and start collection from it in the next iteration.

I guess I can try doing something like this then:

if msg.AncestorExecId != "" {
	execId = msg.AncestorExecId
} else if proc != nil {
	execId = proc.UnsafeGetProcess().ParentExecId
}

if execId != "" && proc != nil && proc.NeededAncestors() && !msg.RefCntDone[AncestorsRefCnt] {
	tetragonAncestors := ev.GetAncestors()
	ancestors, perr := process.GetAncestorProcessesInternal(execId)
	for _, ancestor := range ancestors {
		tetragonAncestors = append(tetragonAncestors, ancestor.UnsafeGetProcess())
		ancestor.RefDec("ancestor")
	}
	ev.SetAncestors(tetragonAncestors)
	if perr == nil {
		msg.RefCntDone[AncestorsRefCnt] = true
	} else {
		if len(ancestors) > 0 {
			msg.AncestorExecId = ancestors[len(ancestors)-1].UnsafeGetProcess().ExecId
		}
		eventcache.CacheRetries(eventcache.AncestorsInfo).Inc()
		err = eventcache.ErrFailedToGetAncestorsInfo
	}
}

Would that be better?

Labels
release-note/major This PR introduces major new functionality

Successfully merging this pull request may close these issues.

Tetragon is not showing process exec ancestors
5 participants