
How to set up prometheus exporter in open-telemetry collector? #37459

Open
naqikazmi97 opened this issue Jan 24, 2025 · 5 comments
Labels: exporter/prometheus, question (Further information is requested)

Comments

@naqikazmi97

Component(s)

exporter/prometheus

Describe the issue you're reporting

I'm trying to set up the prometheus exporter in the open-telemetry collector and want to send metrics from the collector to Prometheus so they can be viewed in Grafana. I'm doing all of this with Flux, but I'm failing to configure the prometheus exporter.

My open-telemetry collector configuration is:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: opentelemetry-collector
  namespace: monitoring
spec:
  releaseName: opentelemetry-collector
  maxHistory: 3
  interval: 1m0s
  suspend: false
  chart:
    spec:
      chart: opentelemetry-collector
      version: "0.111.1"
      sourceRef:
        kind: HelmRepository
        name: open-telemetry-charts
        namespace: flux-system
  values:
    image:
      repository: otel/opentelemetry-collector-contrib
      tag: latest
    mode: deployment
    extraEnvs:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: K8S_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    presets:
      hostMetrics:
        enabled: true
      kubeletMetrics:
        enabled: true
      kubernetesAttributes:
        enabled: true
      kubernetesEvents:
        enabled: true
    clusterRole:
      create: true
      name: "opentelemetry-collector-admin"
      rules:
      - verbs: ["*"]
        resources: ["*"]
        apiGroups: ["*"]
      - verbs: ["*"]
        nonResourceURLs: ["*"]
      clusterRoleBinding:
        name: "opentelemetry-collector-admin"
    serviceAccount:
      create: true
      name: "opentelemetry-collector-admin"
    config:
      receivers:
        kubeletstats:
          collection_interval: 10s
          auth_type: "serviceAccount"
          endpoint: https://${env:K8S_NODE_NAME}:10250
          insecure_skip_verify: true
          metric_groups:
            - container
            - pod
            - volume
            - node      
          extra_metadata_labels:
            - container.id
        k8s_cluster:
          collection_interval: 10s
          node_conditions_to_report: [Ready, MemoryPressure, DiskPressure, NetworkUnavailable]
          allocatable_types_to_report: [cpu, memory, storage, ephemeral-storage]
        k8s_events:
          auth_type: "serviceAccount"
        otlp:
          protocols:
            grpc:
              endpoint: ${env:MY_POD_IP}:4317
            http:
              endpoint: ${env:MY_POD_IP}:4318
        prometheus:
          config:
            scrape_configs:
            - job_name: opentelemetry-collector
              scrape_interval: 10s
              static_configs:
              - targets:
                - ${env:MY_POD_IP}:8888
      exporters:
        debug:
          verbosity: detailed
        prometheus:
          endpoint: "kube-prometheus-stack-prometheus.monitoring.svc.cluster.local:9090"
      processors:
        batch: {}
        k8sattributes:
          extract:
            metadata:
            - k8s.namespace.name
            - k8s.deployment.name
            - k8s.statefulset.name
            - k8s.daemonset.name
            - k8s.cronjob.name
            - k8s.job.name
            - k8s.node.name
            - k8s.pod.name
            - k8s.pod.uid
            - k8s.pod.start_time
          passthrough: false
          pod_association:
          - sources:
            - from: resource_attribute
              name: k8s.pod.ip
          - sources:
            - from: resource_attribute
              name: k8s.pod.uid
          - sources:
            - from: connection
        memory_limiter:
          check_interval: 5s
          limit_percentage: 80
          spike_limit_percentage: 25
      service:
        telemetry: 
          logs: 
            level: "debug" 
        pipelines:
          metrics:
            receivers:
              - otlp
              # - prometheus
              - k8s_cluster
              - kubeletstats
            processors:
              - batch
              - k8sattributes
              - memory_limiter
            exporters:
              - debug
              - prometheus
    ports:
      metrics:
        enabled: true
        containerPort: 8888
        servicePort: 8888
        protocol: TCP
      otlp-grpc:
        enabled: true
        containerPort: 4317
        servicePort: 4317
        protocol: TCP
      otlp-http:
        enabled: true
        containerPort: 4318
        servicePort: 4318
        protocol: TCP

    ingress:
      enabled: false

    serviceMonitor:
      enabled: true
      extraLabels:
        release: kube-prometheus-stack

I get this error:

collector server run finished with error: cannot start pipelines: listen tcp:9091: bind: cannot assign requested address; failed to shutdown pipelines: no existing monitoring routine is running; no existing monitoring routine is running; no existing monitoring routine is running

Any idea what I can do?

naqikazmi97 added the needs triage (New item requiring triage) label on Jan 24, 2025
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@JaredTan95
Member

The endpoint of the prometheus exporter is the address on which the metrics will be exposed.
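
In other words, the collector opens that address itself and serves metrics on it; it does not push to your Prometheus server. That is why the bind fails: the collector is trying to listen on the Prometheus service's address, which doesn't belong to its own pod. A minimal sketch of a working exporter block (0.0.0.0:9464 is an assumed listen address and port, not something from your setup):

exporters:
  prometheus:
    # Local address inside the collector pod where metrics are served;
    # Prometheus scrapes this, so it must be bindable by the collector.
    endpoint: 0.0.0.0:9464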

JaredTan95 added the question (Further information is requested) label and removed the needs triage (New item requiring triage) label on Jan 24, 2025
@naqikazmi97
Author

naqikazmi97 commented Jan 24, 2025

I'm a little confused here. I'm using the open-telemetry collector Helm chart and the kube-prometheus-stack Helm chart. What I want to do is collect all the metrics from the cluster, nodes, and pods and view them in Grafana. For that, should I use the prometheus exporter or the prometheusremotewrite exporter? And how do I configure it? If anybody can show an example, that would be helpful.

@naqikazmi97
Author

The endpoint of the prometheus exporter is the address on which the metrics will be exposed.

So this should be the address of open-telemetry collector?

@ArthurSens
Member

ArthurSens commented Jan 24, 2025

kube-prometheus-stack should, on its own, be able to collect metrics from your cluster; you don't need the collector at all.

If you want to use a collector anyway, you have two models available: pull or push.

  • The prometheus exporter uses the pull model: the collector exposes metrics on an endpoint, and Prometheus is the active actor that scrapes the metrics from the collector.
  • The prometheusremotewrite exporter uses the push model: the collector is the active actor and sends metrics to your Prometheus.

If you use the prometheus exporter, the endpoint in your configuration is the address on which you want to expose metrics:

exporters:
  prometheus:
    endpoint: localhost:8080

Then, in your Prometheus, you'll need to configure a scrape config that scrapes your collector on port 8080.
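
A minimal sketch of such a scrape config (the service name opentelemetry-collector and the monitoring namespace are assumptions based on your HelmRelease):

scrape_configs:
  - job_name: otel-collector
    scrape_interval: 30s
    static_configs:
      # Assumed in-cluster DNS name of the collector's Service
      - targets: ['opentelemetry-collector.monitoring.svc.cluster.local:8080']

You'd also need to expose that port on the collector's Service, like the entries in the ports: section of your values. Alternatively, since you're running kube-prometheus-stack, a ServiceMonitor pointing at the exporter's port is the more idiomatic route; note that the serviceMonitor your values already enable targets the collector's internal telemetry port (8888), not the exporter.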

If you use the prometheusremotewrite exporter, the endpoint in your configuration is the endpoint of the Prometheus you're sending metrics to:

exporters:
  prometheusremotewrite:
    endpoint: http://kube-prometheus-stack-prometheus.monitoring.svc.cluster.local:9090/api/v1/write
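
One caveat: Prometheus only accepts remote-write requests when its receiver is enabled (the --web.enable-remote-write-receiver flag). With kube-prometheus-stack, a sketch of the corresponding chart value (assuming a chart version that exposes prometheusSpec.enableRemoteWriteReceiver):

prometheus:
  prometheusSpec:
    # Turns on Prometheus' remote-write receiver, which serves /api/v1/write
    enableRemoteWriteReceiver: true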
