
[opentelemetry-operator] Update kube-rbac-proxy image to 0.18.1 to remediate vulnerabilities #1397

Merged
merged 2 commits into from
Oct 28, 2024

Conversation

edwintye
Contributor

This closes #1344, as the earlier PR #1345 didn't go through. It bumps the version to 0.18.1, the latest release, which fixes CVE-2024-28180 and GHSA-xr7q-jx4m-x55m per the release notes, on top of the vulnerabilities fixed in the originally proposed 0.18.0.

@edwintye edwintye requested review from Allex1 and a team as code owners October 24, 2024 20:57
@jaronoff97
Contributor

@edwintye I didn't see it in the changelog, but are you aware of any breaking changes between versions we should be concerned about? Have you tested this locally to ensure it still works as expected?

@edwintye
Contributor Author

I have been using this version in our clusters (1.29/1.30) via overrides for a while and haven't spotted any issues yet. However, I must admit that I have not tested this locally, so let me provide an example workflow.

We create a couple of files, the first being the values we use to install the operator, named operator-values.yaml:

admissionWebhooks:
  certManager:
    enabled: false
manager:
  collectorImage:
    repository: otel/opentelemetry-collector-k8s
  serviceMonitor:
    enabled: true
    metricsEndpoints:
      - port: https # original is the metrics unprotected endpoint
        scheme: https
        interval: 20s # just to give faster result
        bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        tlsConfig:
          insecureSkipVerify: true
kubeRBACProxy:
  image:
    repository: quay.io/brancz/kube-rbac-proxy
    tag: v0.18.1

and the second being the collector + target allocator manifest, named collector-with-ta-prometheus-cr.yaml, which the operator will use to create a collector that scrapes the operator's own metrics:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: opentelemetry-targetallocator-everything-role
rules:
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
      - podmonitors
    verbs:
      - '*'
  - apiGroups: [""]
    resources:
      - namespaces
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/metrics
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - configmaps
    verbs: ["get"]
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opentelemetry-targetallocator-everything-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: opentelemetry-targetallocator-everything-role
subjects:
  - kind: ServiceAccount
    name: opentelemetry-targetallocator-sa
    namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opentelemetry-targetallocator-sa
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta-prometheus-cr
spec:
  mode: statefulset
  serviceAccount: opentelemetry-targetallocator-sa
  targetAllocator:
    enabled: true
    serviceAccount: opentelemetry-targetallocator-sa
    prometheusCR:
      enabled: true
      serviceMonitorSelector: { }
      podMonitorSelector: { }
  config:
    receivers:
      prometheus:
        config:
          scrape_configs: []

    exporters:
      debug:
        verbosity: detailed

    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]

Then we spin up a kind cluster, install the CRDs, install the otel operator, apply the CR to create a collector and corresponding TA, and check for successful scrapes.

# fast create cluster
kind create cluster
# need both crd for the target allocator
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.77.2/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.77.2/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
# install the operator
helm upgrade --install opentelemetry-operator open-telemetry/opentelemetry-operator -f operator-values.yaml
# WAIT, we need the operator to spin up first
# use the operator to create a collector + TA to monitor the operator
kubectl apply --server-side -f collector-with-ta-prometheus-cr.yaml
# wait a bit then we can tail the logs which outputs the scraped metrics
kubectl logs -f collector-with-ta-prometheus-cr-collector-0

This is probably the shortest variant I can come up with for now, and it has some resemblance to how people scrape metrics via the proxy. To demonstrate a failed scrape, there are a couple of options that are relatively easy to do: remove the bearerTokenFile from the service monitor, or remove the non-resource URL permission from the collector service account. If people use the proxy in some other way, I am happy to test that out as well.
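As a concrete illustration of the first failure mode, the service monitor section of operator-values.yaml would change roughly as below (a sketch only; the rest of the values file stays as above). Without the bearer token, Prometheus presents no credentials and kube-rbac-proxy should reject the scrape with an authentication error:

manager:
  serviceMonitor:
    enabled: true
    metricsEndpoints:
      - port: https
        scheme: https
        interval: 20s
        # bearerTokenFile removed on purpose, so the scrape of the
        # protected endpoint should now fail with an auth error
        tlsConfig:
          insecureSkipVerify: true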

@edwintye
Contributor Author

edwintye commented Oct 28, 2024

Sorry about pushing another commit to an approved PR. I double-checked earlier today and realized that the argument --logtostderr=true no longer has any effect from version 0.16.0 onwards. I adjusted the deployment to reflect that.
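For anyone comparing against their own overrides, the effect on the proxy container is just dropping that one argument; a rough sketch follows (the surrounding args are illustrative and depend on the chart's deployment template):

args:
  - --secure-listen-address=0.0.0.0:8443
  - --upstream=http://127.0.0.1:8080/
  # --logtostderr=true removed: kube-rbac-proxy dropped glog-style logging
  # in 0.16.0, so the flag no longer has any effect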

[Screenshot, 2024-10-28]

@jaronoff97 jaronoff97 merged commit 4a60374 into open-telemetry:main Oct 28, 2024
4 checks passed
@jaronoff97
Contributor

Thank you for your contribution!

@lreed-mdsol

Thanks!! @edwintye et al!!
Now if we can get them to accept this in SumoLogic/sumologic-kubernetes-collection#3862

codeboten pushed a commit to codeboten/opentelemetry-helm-charts that referenced this pull request Nov 14, 2024
[opentelemetry-operator] Update kube-rbac-proxy image to 0.18.1 to remediate vulnerabilities (open-telemetry#1397)

* [opentelemetry-operator] Update kube-rbac-proxy image from 0.15.0 to 0.18.1

Signed-off-by: Edwin Tye <[email protected]>

* [opentelemetry-operator] remove argument logtostderr as it no longer has effect since 0.16.0

Signed-off-by: Edwin Tye <[email protected]>

---------

Signed-off-by: Edwin Tye <[email protected]>
Successfully merging this pull request may close these issues.

bump kube-rbac-proxy for opentelemetry-operator to fix CVE