
Add ServiceMonitor in helm chart #97

Open · enrico9034 wants to merge 2 commits into master

Conversation

@enrico9034 (Author)

To make it possible to create the service monitor without having to deploy it manually after installation, I added a value metrics.serviceMonitor.enabled. The default value is false; if set to true, the chart creates the service monitor according to the documentation.
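
For reference, a minimal sketch of what such a conditional template could look like, assuming a templates/servicemonitor.yaml file and the label/selector values shown later in this thread (the actual file in this PR may differ):

# templates/servicemonitor.yaml (illustrative sketch, not necessarily the exact file in this PR)
{{- if .Values.metrics.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: longhorn-prometheus-servicemonitor
  namespace: {{ .Release.Namespace }}
  labels:
    name: longhorn-prometheus-servicemonitor
spec:
  selector:
    matchLabels:
      app: longhorn-manager
  namespaceSelector:
    matchNames:
    - {{ .Release.Namespace }}
  endpoints:
  - port: manager
{{- end }}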

@innobead (Member) left a comment

This depends on https://longhorn.io/docs/1.4.0/monitoring/prometheus-and-grafana-setup/#install-prometheus-operator, but we don't actually require users to use this stack. I don't feel it's good to make this part of the chart for now, until we have an opinionated built-in integration for monitoring.
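
As a side note, one common way to soften such a dependency (just a sketch, not something this PR claims to do) is to render the object only when the value is enabled and the monitoring.coreos.com/v1 API is actually available in the cluster:

{{- if and .Values.metrics.serviceMonitor.enabled (.Capabilities.APIVersions.Has "monitoring.coreos.com/v1") }}
# ... ServiceMonitor manifest as in the sketch above ...
{{- end }}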

@innobead (Member) left a comment

metrics.serviceMonitor.enabled seems specific to the service monitor, so it should be acceptable.

@innobead (Member)

cc @longhorn/qa Can you help with the testing?

@yangchiu (Member)

Verified that with metrics.serviceMonitor.enabled set to true, the corresponding service monitor is created:

$ kubectl get ServiceMonitor -n longhorn-system
NAME                                 AGE
longhorn-prometheus-servicemonitor   2m16s

$ kubectl get ServiceMonitor longhorn-prometheus-servicemonitor -n longhorn-system -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: longhorn
    meta.helm.sh/release-namespace: longhorn-system
  creationTimestamp: "2023-03-14T03:50:37Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: longhorn
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: longhorn
    app.kubernetes.io/version: v1.4.0
    helm.sh/chart: longhorn-1.4.0
    name: longhorn-prometheus-servicemonitor
  name: longhorn-prometheus-servicemonitor
  namespace: longhorn-system
  resourceVersion: "18755"
  uid: 9b5b67fa-264f-4a9c-aecf-b82612cf253b
spec:
  endpoints:
  - port: manager
  namespaceSelector:
    matchNames:
    - longhorn-system
  selector:
    matchLabels:
      app: longhorn-manager
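
For anyone reproducing this, the value can be set at install or upgrade time; a sketch assuming the chart is installed from the usual longhorn/longhorn Helm repository:

# enable the ServiceMonitor when installing/upgrading the chart
helm upgrade --install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace \
  --set metrics.serviceMonitor.enabled=true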

@enrico9034 (Author)

Hi, any update on this? Can you merge it?

@Flou21 commented Jun 3, 2023

Hey,

I was about to deploy Longhorn in a new cluster and set up monitoring for it, which is exactly what is improved here. It would be cool if this could be merged.

@innobead (Member) commented Jun 3, 2023

See if we can make this part of the following release.

@PhanLe1010 left a comment

LGTM

@PhanLe1010 commented Jun 6, 2023

@yangchiu could you please verify that Prometheus is able to fetch the metrics using the ServiceMonitor? Instructions are at https://longhorn.io/docs/1.4.2/monitoring/integrating-with-rancher-monitoring/
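
A quick way to sanity-check the scrape target itself (a sketch; it assumes the standard longhorn-backend service, which fronts the manager pods on port 9500 per the Longhorn docs):

# confirm the manager metrics endpoint answers before checking Prometheus targets
kubectl -n longhorn-system port-forward svc/longhorn-backend 9500:9500 &
curl -s http://localhost:9500/metrics | head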

@PhanLe1010 left a comment

Oh wait, this is not the correct repo. Please create the PR against https://github.com/longhorn/longhorn/tree/master/chart instead, @enrico9034.

@enrico9034 (Author)

Done, here is the PR: longhorn/longhorn#6157

@98jan commented Feb 11, 2024

Would this change still work when NetworkPolicies are enabled? I tried today to scrape the metrics with the ServiceMonitor enabled: https://github.com/longhorn/longhorn/blob/3e04f753f2449ef9d91b623b4b4263c6c843bd9b/chart/values.yaml#L464

But the connection fails, as the available NetworkPolicies block this request.
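
If the NetworkPolicies are indeed what blocks the scrape, an extra allow rule roughly like the following might be needed (a sketch only, not part of this PR; the monitoring namespace label and the 9500 port are assumptions to adjust for your setup):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-to-longhorn-manager
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-manager
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # assumption: adjust to wherever your Prometheus runs
          kubernetes.io/metadata.name: cattle-monitoring-system
    ports:
    - protocol: TCP
      port: 9500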

@avk999 commented Jun 5, 2024

Many installations require specific labels/annotations/namespaces for the ServiceMonitor and other Prometheus stack objects. It would be great to have those configurable, for example:

metrics:
  serviceMonitor:
    enabled: true
    additional_labels: {}
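
A possible way the chart could consume the additional labels suggested above (a sketch using the proposed additional_labels key; the final value name would be up to the chart maintainers):

metadata:
  name: longhorn-prometheus-servicemonitor
  namespace: {{ .Release.Namespace }}
  labels:
    name: longhorn-prometheus-servicemonitor
    {{- with .Values.metrics.serviceMonitor.additional_labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}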
