feat: Creating a Custom Resource Definition for a pod IP to metadata mapper. #1071
Conversation
This commit: - Initializes a project with name podinfo and repo github.com/cilium/tetragon - Provides boilerplate code for creating the operator Signed-off-by: Prateek Singh <[email protected]>
This commit: - Defines the Custom Resource. Signed-off-by: Prateek Singh <[email protected]>
By default, a custom resource has spec (desired state) and status (current state) fields, and Kubebuilder auto-generates manifests and deepcopy files for them. But for our PodInfo resource we don't want the status info, so we need to remove it. This commit: - Removes the //+kubebuilder:subresource:status comment from podinfo_types.go so kubebuilder doesn't auto-generate anything for the status field. - The make manifests and make generate commands create updated manifests and deepcopy files. Signed-off-by: Prateek Singh <[email protected]>
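As a rough illustration of the shape being discussed (not the actual Tetragon types, which embed metav1.TypeMeta/ObjectMeta from apimachinery, and whose field names are assumptions here), a spec-only type with the status field and its marker removed might look like:

```go
package main

import "fmt"

// PodInfoSpec holds the desired state. The field name is illustrative;
// the real CRD maps pod IPs to pod metadata.
type PodInfoSpec struct {
	HostNetwork bool
}

// PodInfo is the custom resource. Note: no Status field and no
// //+kubebuilder:subresource:status marker above the type, so
// `make manifests` will not generate a status subresource for it.
type PodInfo struct {
	// In the real code, metav1.TypeMeta and metav1.ObjectMeta go here.
	Name string
	Spec PodInfoSpec
}

func main() {
	p := PodInfo{Name: "example", Spec: PodInfoSpec{HostNetwork: false}}
	fmt.Printf("%+v\n", p)
}
```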
lgtm overall. a few comments
Hi, one small comment from my side: would it make sense to add the files into pkg/k8s instead in
😅
yeah that's also an option. we ended up going with
…ot ginkgo Signed-off-by: Prateek Singh <[email protected]>
- Updated the spec field - Added the status field - Generated manifests Signed-off-by: Prateek Singh <[email protected]>
I made the changes suggested, please have a look. Should I change the name to TetragonPod right away or later?
hostNetwork:
  description: Host networking requested for this pod. Use the host's
    network namespace. If this option is set, the ports that will be
    used must be specified.
Maybe my comment is stupid: the doc says "the ports that will be used must be specified" but doesn't specify where or how. Maybe, if it makes sense, add that here too?
Thank you for pointing that out. I will make the changes.
I haven't made the changes there yet, will do right before the PR is ready.
Signed-off-by: Prateek Singh <[email protected]>
…e operator to register the CRD with the Kubernetes API. - It has an embed.go file that will include the YAML file in the binary. - Created a register.go that will take that YAML file, create a CRD object, and return it. Signed-off-by: Prateek Singh <[email protected]>
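The embed-and-parse flow this commit describes could look roughly like the stdlib-only sketch below. The real embed.go would use a `//go:embed` directive on the generated manifest file (a const stands in here so the sketch is self-contained), and register.go decodes the YAML into an apiextensions CustomResourceDefinition object rather than scanning lines; the CRD name used is illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// In the real embed.go this would be something like:
//
//	//go:embed cilium.io_podinfo.yaml   (filename assumed)
//	var crdYAML []byte
//
// A const stands in here so this sketch compiles on its own.
const crdYAML = `apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: podinfo.cilium.io
`

// crdName extracts metadata.name naively by scanning lines; the real
// register.go would decode the YAML with the Kubernetes scheme instead.
func crdName(yaml string) string {
	for _, line := range strings.Split(yaml, "\n") {
		t := strings.TrimSpace(line)
		if strings.HasPrefix(t, "name:") {
			return strings.TrimSpace(strings.TrimPrefix(t, "name:"))
		}
	}
	return ""
}

func main() {
	fmt.Println(crdName(crdYAML)) // podinfo.cilium.io
}
```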
…ator - Renamed the register package to the client package, and wrote logic that will register the CRD with the API server - Used the tetragon client package to register the CRD Signed-off-by: Prateek Singh <[email protected]>
func RegisterCRD(client *apiextclientset.Clientset) error {
	crd := GetCRD()
	_, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{})
this will fail if the CRD already exists. we need to call Update() if the CRD already exists.
oh thanks for pointing that out, I totally missed that. Working on it.
I made changes; the current implementation is like this:
- The CRD is created only if it doesn't already exist.
- If the CRD already exists, no changes are made.
I have a few questions regarding the update logic, like how would it know that the CRD needs to be updated? I went through the TracingPolicy CRD logic to see how it handles this, but I still have a few doubts. Let's have a little discussion on it in the meeting today.
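The create-or-update fallback the reviewer suggested can be sketched with stdlib-only stand-ins. The real code would use the apiextensions clientset and check the API error with apierrors.IsAlreadyExists from k8s.io/apimachinery; the fake store, sentinel error, and CRD name here are all illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrAlreadyExists stands in for the Kubernetes "AlreadyExists" API error
// that apierrors.IsAlreadyExists would detect in the real operator.
var ErrAlreadyExists = errors.New("already exists")

// fakeStore simulates the API server's store of CustomResourceDefinitions.
type fakeStore struct{ crds map[string]bool }

func newFakeStore() *fakeStore { return &fakeStore{crds: map[string]bool{}} }

func (s *fakeStore) Create(name string) error {
	if s.crds[name] {
		return ErrAlreadyExists
	}
	s.crds[name] = true
	return nil
}

func (s *fakeStore) Update(name string) error {
	if !s.crds[name] {
		return fmt.Errorf("crd %q not found", name)
	}
	return nil
}

// RegisterCRD creates the CRD and, if it already exists, falls back to
// updating it instead of failing -- the pattern discussed in the review.
func RegisterCRD(s *fakeStore, name string) error {
	err := s.Create(name)
	if errors.Is(err, ErrAlreadyExists) {
		return s.Update(name)
	}
	return err
}

func main() {
	store := newFakeStore()
	fmt.Println(RegisterCRD(store, "podinfo.cilium.io")) // first call creates: <nil>
	fmt.Println(RegisterCRD(store, "podinfo.cilium.io")) // second call updates: <nil>
}
```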
Signed-off-by: Prateek Singh <[email protected]>
…n with the help of the operator. - The logic now handles the condition where the CustomResourceDefinition already exists in the cluster. Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
@@ -94,9 +95,16 @@ func operatorExecute() {
	// Register the CRDs after validating that we are running on a supported
	// version of K8s.
	if !operatorOption.Config.SkipCRDCreation {
		if err := client.RegisterCRDs(k8sAPIExtClient); err != nil {

// Register Tracing Policy CRD
let's run this logic if and only if the tetragonPod.enabled flag is set to true.
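Combined with the SkipCRDCreation gate shown in the diff above, the suggested check might look like this stdlib-only sketch; the struct and field names are assumptions for illustration, not Tetragon's actual config types.

```go
package main

import "fmt"

// Config loosely mirrors the operator options discussed above. The field
// names are illustrative; TetragonPodEnabled corresponds to the
// tetragonPod.enabled flag from the review comment.
type Config struct {
	SkipCRDCreation    bool
	TetragonPodEnabled bool
}

// shouldRegisterPodInfoCRD applies both gates from the review: global CRD
// creation must not be skipped, and the tetragonPod feature must be enabled.
func shouldRegisterPodInfoCRD(c Config) bool {
	return !c.SkipCRDCreation && c.TetragonPodEnabled
}

func main() {
	fmt.Println(shouldRegisterPodInfoCRD(Config{TetragonPodEnabled: true}))                        // true
	fmt.Println(shouldRegisterPodInfoCRD(Config{SkipCRDCreation: true, TetragonPodEnabled: true})) // false
	fmt.Println(shouldRegisterPodInfoCRD(Config{}))                                                // false
}
```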
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
Signed-off-by: Prateek Singh <[email protected]>
@prateek041 just an fyi i pushed a copy of this branch to #1281 so that we can test the CI image build 🧑🔬
The entire issue requires implementing a Custom Resource that maps pod IPs to metadata and eventually removing the Cilium Endpoint custom resource. The resolution is to be done in four parts; this is the first step, where I create the Custom Resource Definition.
Next up: writing the controller logic.