diff --git a/loki/k8s-monitoring/preprocessed.md b/docs/examples/k8s-monitoring-example.md old mode 100755 new mode 100644 similarity index 93% rename from loki/k8s-monitoring/preprocessed.md rename to docs/examples/k8s-monitoring-example.md index 8a9cd09..f6799a6 --- a/loki/k8s-monitoring/preprocessed.md +++ b/docs/examples/k8s-monitoring-example.md @@ -9,9 +9,7 @@ killercoda: backend: imageid: kubernetes-kubeadm-2nodes --- - - # Kubernetes Monitoring with Loki One of the primary use cases for Loki is to collect and store logs from your [Kubernetes cluster](https://kubernetes.io/docs/concepts/overview/). These logs fall into three categories: @@ -32,7 +30,6 @@ Before you begin, here are some things you should know: * **Deployment**: We will deploy Loki, Grafana and Alloy (As part of the Kubernetes Monitoring Helm) in the `meta` namespace of your Kubernetes cluster. Make sure you have the necessary permissions to create resources in this namespace. These pods will also require resources to run so consider the amount of capacity your nodes have available. It also possible to just deploy the Kubernetes monitoring helm (since it has a minimal resource footprint) within your cluster and write logs to an external Loki instance or Grafana Cloud. * **Storage**: In this tutorial, Loki will use the default object storage backend provided in the Loki Helm; [MinIO](https://min.io/docs/minio/kubernetes/upstream/index.html). You should migrate to a more production-ready storage backend like [S3](https://aws.amazon.com/s3/getting-started/), [GCS](https://cloud.google.com/storage/docs), [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/) or a MinIO Cluster for production use cases. - ## Prerequisites Before you begin, you will need the following: @@ -41,18 +38,19 @@ Before you begin, you will need the following: * [kubectl](https://kubernetes.io/docs/tasks/tools/) installed on your local machine. 
* [helm](https://helm.sh/docs/intro/install/) installed on your local machine. -> **Tip:** -> Alternatively, you can try out this example in our interactive learning environment: [Kubernetes Monitoring with Loki](https://killercoda.com/grafana-labs/course/loki/k8s-monitoring). -> -> It's a fully configured environment with all the dependencies already installed. -> -> ![Interactive](/media/docs/loki/loki-ile.svg) -> -> Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). + +{{< admonition type="tip" >}} +Alternatively, you can try out this example in our interactive learning environment: [Kubernetes Monitoring with Loki](https://killercoda.com/grafana-labs/course/loki/k8s-monitoring). + +It's a fully configured environment with all the dependencies already installed. + +![Interactive](/media/docs/loki/loki-ile.svg) + +Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). +{{< /admonition >}} - ## Step 1: Create the `meta` and `prod` namespaces @@ -66,9 +64,7 @@ Create the `meta` and `prod` namespaces by running the following commands: ```bash kubectl create namespace meta && kubectl create namespace prod ``` - - ## Step 2: Add the Grafana Helm repository @@ -81,6 +77,8 @@ helm repo add grafana https://grafana.github.io/helm-charts && helm repo update It's recommended to also run `helm repo update` to ensure you have the latest version of the charts. + + ## Step 3: Clone the tutorial repository Clone the tutorial repository by running the following command: @@ -91,10 +89,8 @@ git clone https://github.com/grafana/alloy-scenarios.git && cd alloy-scenarios/k As well as cloning the repository, we have also changed directories to `alloy-scenarios/k8s-logs`. **The rest of this tutorial assumes you are in this directory.** - - - - + + ## Step 4: Deploy Loki Grafana Loki will be used to store our collected logs. 
In this tutorial we will deploy Loki with a minimal footprint and use the default storage backend provided by the Loki Helm (MinIO). @@ -109,13 +105,15 @@ helm install --values loki-values.yml loki grafana/loki -n meta ``` - ```bash helm install --values killercoda/loki-values.yml loki grafana/loki -n meta ``` + This command will deploy Loki in the `meta` namespace. The command also includes a `values` file that specifies the configuration for Loki. For more details on how to configure the Loki Helm refer to the Loki Helm [documentation](https://grafana.com/docs/loki//setup/install/helm). + + ## Step 5: Deploy Grafana Next we will deploy Grafana to the meta namespace. Grafana will be used to visualize the logs stored in Loki. To deploy Grafana run the following command: @@ -147,10 +145,8 @@ As before the command also includes a `values` file that specifies the configura ``` This configuration defines a data source named `Loki` that Grafana will use to query logs stored in Loki. The `url` attribute specifies the URL of the Loki gateway. The Loki gateway is a service that sits in front of the Loki API and provides a single endpoint for ingesting and querying logs. The URL is in the format `http://loki-gateway.meta.svc.cluster.local:80`. The `loki-gateway` service is created by the Loki Helm chart and is used to query logs stored in Loki. **If you choose to deploy Loki in a different namespace or with a different name, you will need to update the `url` attribute accordingly.** - - - - + + ## Step 6: Deploy the Kubernetes Monitoring Helm The Kubernetes Monitoring Helm chart is used for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack. This includes the ability to collect; metrics, logs, traces & continuous profiling data. The scope of this tutorial is to deploy the Kubernetes Monitoring Helm chart to collect pod logs and Kubernetes events. 
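 Other telemetry types follow the same pattern as logs. As a sketch only (not part of this tutorial), enabling metrics collection would mean adding a Prometheus-compatible destination and turning on the metrics collector in the values file; the destination name and URL below are illustrative assumptions, not values used anywhere in this tutorial:

```yaml
# Illustrative only: enabling metrics alongside logs in the
# Kubernetes Monitoring Helm values. The destination name and
# URL are assumptions for this sketch.
destinations:
  - name: prometheus
    type: prometheus
    url: http://prometheus.meta.svc.cluster.local:9090/api/v1/write

alloy-metrics:
  enabled: true
```
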
@@ -217,10 +213,8 @@ To break down the configuration file: * Disable the collection of node logs for the purpose of this tutorial as it requires the mounting of `/var/log/journal`. This is out of scope for this tutorial. * Lastly, define the role of the collector. The Kubernetes Monitoring Helm chart will deploy only what you need and nothing more. In this case, we are telling the Helm chart to only deploy Alloy with the capability to collect logs. If you need to collect K8s metrics, traces, or continuous profiling data, you can enable the respective collectors. - - - - + + ## Step 7: Accessing Grafana To access Grafana, you will need to port-forward the Grafana service to your local machine. To do this, run the following command: @@ -230,8 +224,9 @@ export POD_NAME=$(kubectl get pods --namespace meta -l "app.kubernetes.io/name=g kubectl --namespace meta port-forward $POD_NAME 3000 --address 0.0.0.0 ``` -> **Tip:** -> This will make your terminal unusable until you stop the port-forwarding process. To do this, press `Ctrl + C`. +{{< admonition type="tip" >}} +This will make your terminal unusable until you stop the port-forwarding process. To do this, press `Ctrl + C`. +{{< /admonition >}} This command will port-forward the Grafana service to your local machine on port `3000`. You can access Grafana by navigating to [http://localhost:3000](http://localhost:3000) in your browser. The default credentials are `admin` and `adminadminadmin`. 
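 Once logged in, you can also query the collected logs with LogQL in Explore. As a minimal sketch, assuming the `cluster` label defined in the monitoring values file (`meta-monitoring-tutorial`) and the `namespace` label kept by the chart, a query for error lines from the `prod` namespace might look like:

```logql
{cluster="meta-monitoring-tutorial", namespace="prod"} |= "error"
```
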
One of the first places you should visit is Explore Logs which will provide a no-code view of the logs being stored in Loki: @@ -239,6 +234,8 @@ This command will port-forward the Grafana service to your local machine on port {{< figure max-width="100%" src="/media/docs/loki/k8s-logs-explore-logs.png" caption="Explore Logs view of K8s logs" alt="Explore Logs view of K8s logs" >}} + + ## Step 8 (Optional): View the Alloy UI The Kubernetes Monitoring Helm chart deploys Grafana Alloy to collect and forward telemetry data from the Kubernetes cluster. The Helm is designed to abstract you from creating an Alloy configuration file. However if you would like to understand the pipeline you can view the Alloy UI. To access the Alloy UI, you will need to port-forward the Alloy service to your local machine. To do this, run the following command: @@ -248,16 +245,16 @@ export POD_NAME=$(kubectl get pods --namespace meta -l "app.kubernetes.io/name=a kubectl --namespace meta port-forward $POD_NAME 12345 --address 0.0.0.0 ``` -> **Tip:** -> This will make your terminal unusable until you stop the port-forwarding process. To do this, press `Ctrl + C`. +{{< admonition type="tip" >}} +This will make your terminal unusable until you stop the port-forwarding process. To do this, press `Ctrl + C`. +{{< /admonition >}} This command will port-forward the Alloy service to your local machine on port `12345`. You can access the Alloy UI by navigating to [http://localhost:12345](http://localhost:12345) in your browser. {{< figure max-width="100%" src="/media/docs/loki/k8s-logs-alloy-ui.png" caption="Grafana Alloy UI" alt="Grafana Alloy UI" >}} - - - + + ## Step 9: Adding a sample application to `prod` Lastly, lets deploy a sample application to the `prod` namespace that will generate some logs. 
To deploy the sample application run the following command: @@ -279,10 +276,8 @@ and navigate to [http://localhost:3000/a/grafana-lokiexplore-app](http://localho {{< figure max-width="100%" src="/media/docs/loki/k8s-logs-tempo.png" caption="Label view of Tempo logs" alt="Label view of Tempo logs" >}} - - + - ## Conclusion In this tutorial, you learned how to deploy Loki, Grafana, and the Kubernetes Monitoring Helm chart to collect and store logs from a Kubernetes cluster. We have deployed a minimal test version of each of these helm charts to demonstrate how quickly you can get started with Loki. It now worth exploring each of these helm charts in more detail to understand how to scale them to meet your production needs: @@ -291,7 +286,4 @@ In this tutorial, you learned how to deploy Loki, Grafana, and the Kubernetes Mo * [Grafana Helm](https://grafana.com/docs/grafana/latest/installation/helm/) * [Kubernetes Monitoring Helm](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/) - - - - + \ No newline at end of file diff --git a/loki/k8s-monitoring-helm/finish.md b/loki/k8s-monitoring-helm/finish.md new file mode 100644 index 0000000..d8304b6 --- /dev/null +++ b/loki/k8s-monitoring-helm/finish.md @@ -0,0 +1,9 @@ +# Conclusion + +In this tutorial, you learned how to deploy Loki, Grafana, and the Kubernetes Monitoring Helm chart to collect and store logs from a Kubernetes cluster. We have deployed a minimal test version of each of these Helm charts to demonstrate how quickly you can get started with Loki. 
It is now worth exploring each of these Helm charts in more detail to understand how to scale them to meet your production needs: + +- [Loki Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/) + +- [Grafana Helm chart](https://grafana.com/docs/grafana/latest/installation/helm/) + +- [Kubernetes Monitoring Helm chart](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/) diff --git a/loki/k8s-monitoring/index.json b/loki/k8s-monitoring-helm/index.json similarity index 92% rename from loki/k8s-monitoring/index.json rename to loki/k8s-monitoring-helm/index.json index 3270f14..144835e 100644 --- a/loki/k8s-monitoring/index.json +++ b/loki/k8s-monitoring-helm/index.json @@ -1,5 +1,5 @@ { - "title": "Kubernetes Monitoring with Loki", + "title": "Kubernetes Monitoring Helm", "description": "Learn how to collect and store logs from your Kubernetes cluster using Loki.", "details": { "intro": { diff --git a/loki/k8s-monitoring/intro.md b/loki/k8s-monitoring-helm/intro.md similarity index 51% rename from loki/k8s-monitoring/intro.md rename to loki/k8s-monitoring-helm/intro.md index c9cbac9..db28986 100644 --- a/loki/k8s-monitoring/intro.md +++ b/loki/k8s-monitoring-helm/intro.md @@ -1,8 +1,8 @@ -# Kubernetes Monitoring with Loki +# Kubernetes Monitoring Helm One of the primary use cases for Loki is to collect and store logs from your [Kubernetes cluster](https://kubernetes.io/docs/concepts/overview/). These logs fall into three categories: -1. [**Pod logs**](https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes): Logs generated by containers otherwise known as logs running in your cluster. +1. [**Pod logs**](https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes): Logs generated by pods running in your cluster. 1. 
[**Kubernetes Events**](https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/): Logs generated by the Kubernetes API server. @@ -16,8 +16,8 @@ In this tutorial, we will deploy [Loki](https://grafana.com/docs/loki/latest/get Before you begin, here are some things you should know: -- **Loki**: Loki can run in a single binary mode or as a distributed system. In this tutorial, we will deploy Loki as a single binary otherwise known as monolithic mode. Loki can be vertically scaled in this mode depending on the amount of logs you are collecting. It is recommended to run Loki in a distributed/microservice mode for production use cases to monitor high volumes of logs. +- **Loki**: Loki can run in a single binary mode or as a distributed system. In this tutorial, we will deploy Loki as a single binary, otherwise known as monolithic mode. Loki can be vertically scaled in this mode depending on the number of logs you are collecting. Grafana Labs recommends running Loki in a distributed/microservice mode for production use cases to monitor high volumes of logs. -- **Deployment**: We will deploy Loki, Grafana and Alloy (As part of the Kubernetes Monitoring Helm) in the `meta`{{copy}} namespace of your Kubernetes cluster. Make sure you have the necessary permissions to create resources in this namespace. These pods will also require resources to run so consider the amount of capacity your nodes have available. It also possible to just deploy the Kubernetes monitoring helm (since it has a minimal resource footprint) within your cluster and write logs to an external Loki instance or Grafana Cloud. +- **Deployment**: You will deploy Loki, Grafana, and Alloy (As part of the Kubernetes Monitoring Helm chart) in the `meta`{{copy}} namespace of your Kubernetes cluster. Make sure you have the necessary permissions to create resources in this namespace. These pods will also require resources to run, so consider the amount of capacity your nodes have available. 
It is also possible to just deploy the Kubernetes Monitoring Helm chart (since it has a minimal resource footprint) within your cluster and write logs to an external Loki instance or Grafana Cloud. -- **Storage**: In this tutorial, Loki will use the default object storage backend provided in the Loki Helm; [MinIO](https://min.io/docs/minio/kubernetes/upstream/index.html). You should migrate to a more production-ready storage backend like [S3](https://aws.amazon.com/s3/getting-started/), [GCS](https://cloud.google.com/storage/docs), [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/) or a MinIO Cluster for production use cases. +- **Storage**: In this tutorial, Loki will use the default object storage backend provided in the Loki Helm chart; [MinIO](https://min.io/docs/minio/kubernetes/upstream/index.html). You should migrate to a more production-ready storage backend like [S3](https://aws.amazon.com/s3/getting-started/), [GCS](https://cloud.google.com/storage/docs), [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/) or a MinIO Cluster for production use cases. diff --git a/loki/k8s-monitoring-helm/preprocessed.md b/loki/k8s-monitoring-helm/preprocessed.md new file mode 100755 index 0000000..256a11c --- /dev/null +++ b/loki/k8s-monitoring-helm/preprocessed.md @@ -0,0 +1,311 @@ +--- +title: Kubernetes Monitoring Helm +menuTitle: Kubernetes Monitoring Helm +weight: 300 +description: Learn how to collect and store logs from your Kubernetes cluster using Loki. +killercoda: + title: Kubernetes Monitoring Helm + description: Learn how to collect and store logs from your Kubernetes cluster using Loki. + backend: + imageid: kubernetes-kubeadm-2nodes +--- + + + +# Kubernetes Monitoring Helm + +One of the primary use cases for Loki is to collect and store logs from your [Kubernetes cluster](https://kubernetes.io/docs/concepts/overview/). These logs fall into three categories: + +1. 
[**Pod logs**](https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes): Logs generated by pods running in your cluster. +2. [**Kubernetes Events**](https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/): Logs generated by the Kubernetes API server. +3. [**Node logs**](https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-node-logging-agent): Logs generated by the nodes in your cluster. + +{{< figure max-width="75%" src="/media/docs/loki/loki-k8s-logs.png" caption="Scraping Kubernetes Logs" alt="Scraping Kubernetes Logs" >}} + +In this tutorial, we will deploy [Loki](https://grafana.com/docs/loki/latest/get-started/overview/) and the [Kubernetes Monitoring Helm chart](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/) to collect two of these log types: Pod logs and Kubernetes events. We will also deploy [Grafana](https://grafana.com/docs/grafana/latest/) to visualize these logs. + +## Things to know + +Before you begin, here are some things you should know: + +* **Loki**: Loki can run in a single binary mode or as a distributed system. In this tutorial, we will deploy Loki as a single binary, otherwise known as monolithic mode. Loki can be vertically scaled in this mode depending on the number of logs you are collecting. Grafana Labs recommends running Loki in a distributed/microservice mode for production use cases to monitor high volumes of logs. +* **Deployment**: You will deploy Loki, Grafana, and Alloy (As part of the Kubernetes Monitoring Helm chart) in the `meta` namespace of your Kubernetes cluster. Make sure you have the necessary permissions to create resources in this namespace. These pods will also require resources to run, so consider the amount of capacity your nodes have available. 
It is also possible to just deploy the Kubernetes Monitoring Helm chart (since it has a minimal resource footprint) within your cluster and write logs to an external Loki instance or Grafana Cloud. +* **Storage**: In this tutorial, Loki will use the default object storage backend provided in the Loki Helm chart; [MinIO](https://min.io/docs/minio/kubernetes/upstream/index.html). You should migrate to a more production-ready storage backend like [S3](https://aws.amazon.com/s3/getting-started/), [GCS](https://cloud.google.com/storage/docs), [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/) or a MinIO Cluster for production use cases. + + +## Prerequisites + +Before you begin, you will need the following: + +* A Kubernetes cluster running version `1.23` or later. +* [kubectl](https://kubernetes.io/docs/tasks/tools/) installed on your local machine. +* [Helm](https://helm.sh/docs/intro/install/) installed on your local machine. + +> **Tip:** +> Alternatively, you can try out this example in our interactive learning environment: [Kubernetes Monitoring with Loki](https://killercoda.com/grafana-labs/course/loki/k8s-monitoring-helm). +> +> It's a fully configured environment with all the dependencies already installed. +> +> ![Interactive](/media/docs/loki/loki-ile.svg) +> +> Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). + + + + + + +## Create the `meta` and `prod` namespaces + +The K8s Monitoring Helm chart will monitor two namespaces: `meta` and `prod`: +- `meta` namespace: This namespace will be used to deploy Loki, Grafana, and Alloy. +- `prod` namespace: This namespace will be used to deploy the sample application that will generate logs.
+ +Create the `meta` and `prod` namespaces by running the following command: + +```bash +kubectl create namespace meta && kubectl create namespace prod +``` + + + + + +## Add the Grafana Helm repository + +All three Helm charts (Loki, Grafana, and the Kubernetes Monitoring Helm) are available in the Grafana Helm repository. Add the Grafana Helm repository by running the following command: + +```bash +helm repo add grafana https://grafana.github.io/helm-charts && helm repo update +``` + +As well as adding the repo to our local helm list, we also run `helm repo update` to ensure you have the latest version of the charts. + +## Clone the tutorial repository + +Clone the tutorial repository by running the following command: + +```bash +git clone https://github.com/grafana/alloy-scenarios.git +``` + +Then change directories to the `alloy-scenarios/k8s/logs` directory: + +```bash +cd alloy-scenarios/k8s/logs +``` + +**The rest of this tutorial assumes you are in this directory.** + + + + + +## Deploy Loki + +Grafana Loki will be used to store our collected logs. In this tutorial we will deploy Loki with a minimal footprint and use the default storage backend provided by the Loki Helm chart, MinIO. + +> **Note**: Due to the resource constraints of the Kubernetes cluster running in the playground, we are deploying Loki using a custom values file. This values file reduces the resource requirements of Loki. This turns off features such as cache and Loki Canary, and runs Loki with limited resources. This can take up to **1 minute** to complete. + +To deploy Loki run the following command: + + +```bash +helm install --values loki-values.yml loki grafana/loki -n meta +``` + + + +```bash +helm install --values killercoda/loki-values.yml loki grafana/loki -n meta +``` + +This command will deploy Loki in the `meta` namespace. The command also includes a `values` file that specifies the configuration for Loki. 
For more details on how to configure the Loki Helm chart refer to the Loki Helm [documentation](https://grafana.com/docs/loki//setup/install/helm). + +## Deploy Grafana + +Next we will deploy Grafana to the `meta` namespace. You will use Grafana to visualize the logs stored in Loki. To deploy Grafana run the following command: + +```bash +helm install --values grafana-values.yml grafana grafana/grafana --namespace meta +``` + +As before the command also includes a `values` file that specifies the configuration for Grafana. There are two important configuration attributes to take note of: + +1. `adminUser` & `adminPassword`: These are the credentials you will use to log in to Grafana. The values are `admin` and `adminadminadmin` respectively. The recommended practice is to either use a Kubernetes secret or allow Grafana to generate a password for you. For more details on how to configure the Grafana Helm chart, refer to the Grafana Helm [documentation](https://grafana.com/docs/grafana/latest/installation/helm/). + +2. `datasources`: This section of the configuration lets you define the data sources that Grafana should use. In this tutorial, you will define a Loki data source. The data source is defined as follows: + + ```yaml + datasources: + datasources.yaml: + apiVersion: 1 + datasources: + - name: Loki + type: loki + access: proxy + orgId: 1 + url: http://loki-gateway.meta.svc.cluster.local:80 + basicAuth: false + isDefault: false + version: 1 + editable: false + ``` + This configuration defines a data source named `Loki` that Grafana will use to query logs stored in Loki. The `url` attribute specifies the URL of the Loki gateway. The Loki gateway is a service that sits in front of the Loki API and provides a single endpoint for ingesting and querying logs. The URL is in the format `http://loki-gateway..svc.cluster.local:80`. The `loki-gateway` service is created by the Loki Helm chart and is used to query logs stored in Loki. 
**If you choose to deploy Loki in a different namespace or with a different name, you will need to update the `url` attribute accordingly.** + + + + + +## Deploy the Kubernetes Monitoring Helm chart + +The Kubernetes Monitoring Helm chart is used for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana stack. This includes the ability to collect metrics, logs, traces, and continuous profiling data. The scope of this tutorial is to deploy the Kubernetes Monitoring Helm chart to collect pod logs and Kubernetes events. + +To deploy the Kubernetes Monitoring Helm chart run the following command: + +```bash +helm install --values ./k8s-monitoring-values.yml k8s grafana/k8s-monitoring -n meta +``` +Within the configuration file `k8s-monitoring-values.yml` we have defined the following: + +```yaml +--- +cluster: + name: meta-monitoring-tutorial + +destinations: + - name: loki + type: loki + url: http://loki-gateway.meta.svc.cluster.local/loki/api/v1/push + + +clusterEvents: + enabled: true + collector: alloy-logs + namespaces: + - meta + - prod + +nodeLogs: + enabled: false + +podLogs: + enabled: true + gatherMethod: kubernetesApi + collector: alloy-logs + labelsToKeep: ["app_kubernetes_io_name","container","instance","job","level","namespace","service_name","service_namespace","deployment_environment","deployment_environment_name"] + structuredMetadata: + pod: pod # Set structured metadata "pod" from label "pod" + namespaces: + - meta + - prod + +# Collectors +alloy-singleton: + enabled: false + +alloy-metrics: + enabled: false + +alloy-logs: + enabled: true + +alloy-profiles: + enabled: false + +alloy-receiver: + enabled: false +``` + +To break down the configuration file: +* Define the cluster name as `meta-monitoring-tutorial`. This a static label that will be attached to all logs collected by the Kubernetes Monitoring Helm chart. +* Define a destination named `loki` that will be used to forward logs to Loki. 
The `url` attribute specifies the URL of the Loki gateway. **If you choose to deploy Loki in a different namespace or in a different location entirely, you will need to update the `url` attribute accordingly.** +* Enable the collection of cluster events and pod logs: + * `collector`: specifies which collector to use to collect logs. In this case, we are using the `alloy-logs` collector. + * `labelsToKeep`: specifies the labels to keep when collecting logs. Note this does not drop logs. This is useful when you do not want to apply a high cardinality label. In this case we have removed `pod` from the labels to keep. + * `structuredMetadata`: specifies the structured metadata to collect. In this case, we are setting the structured metadata `pod` so we can retain the pod name for querying, though it does not need to be indexed as a label. + * `namespaces`: specifies the namespaces to collect logs from. In this case, we are collecting logs from the `meta` and `prod` namespaces. +* Disable the collection of node logs for the purpose of this tutorial as it requires the mounting of `/var/log/journal`. This is out of scope for this tutorial. +* Lastly, define the role of the collector. The Kubernetes Monitoring Helm chart will deploy only what you need and nothing more. In this case, we are telling the Helm chart to only deploy Alloy with the capability to collect logs. If you need to collect K8s metrics, traces, or continuous profiling data, you can enable the respective collectors. + + + + + +## Accessing Grafana + +To access Grafana, you will need to port-forward the Grafana service to your local machine. 
To do this, run the following command: + +```bash +export POD_NAME=$(kubectl get pods --namespace meta -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}") && \ +kubectl --namespace meta port-forward $POD_NAME 3000 --address 0.0.0.0 +``` + +> **Tip:** +> This will make your terminal unusable until you stop the port-forwarding process. To stop the process, press `Ctrl + C`. + +This command will port-forward the Grafana service to your local machine on port `3000`. + +You can now access Grafana by navigating to [http://localhost:3000](http://localhost:3000) in your browser. The default credentials are `admin` and `adminadminadmin`. + +One of the first places you should visit is Explore Logs, which lets you automatically visualize and explore your logs without having to write queries: +[http://localhost:3000/a/grafana-lokiexplore-app](http://localhost:3000/a/grafana-lokiexplore-app) + +{{< figure max-width="100%" src="/media/docs/loki/k8s-logs-explore-logs.png" caption="Explore Logs view of K8s logs" alt="Explore Logs view of K8s logs" >}} + +## (Optional): View the Alloy UI + +The Kubernetes Monitoring Helm chart deploys Grafana Alloy to collect and forward telemetry data from the Kubernetes cluster. The Helm chart is designed to abstract away the creation of an Alloy configuration file. However, if you would like to understand the pipeline, you can view the Alloy UI. To access the Alloy UI, you will need to port-forward the Alloy service to your local machine. To do this, run the following command: + +```bash +export POD_NAME=$(kubectl get pods --namespace meta -l "app.kubernetes.io/name=alloy-logs,app.kubernetes.io/instance=k8s" -o jsonpath="{.items[0].metadata.name}") && \ +kubectl --namespace meta port-forward $POD_NAME 12345 --address 0.0.0.0 +``` + +> **Tip:** +> This will make your terminal unusable until you stop the port-forwarding process. To stop the process, press `Ctrl + C`. 
+ +This command will port-forward the Alloy service to your local machine on port `12345`. You can access the Alloy UI by navigating to [http://localhost:12345](http://localhost:12345) in your browser. + +{{< figure max-width="100%" src="/media/docs/loki/k8s-logs-alloy-ui.png" caption="Grafana Alloy UI" alt="Grafana Alloy UI" >}} + + + + +## Adding a sample application to `prod` + +Finally, let's deploy a sample application to the `prod` namespace that will generate some logs. To deploy the sample application run the following command: + +```bash +helm install tempo grafana/tempo-distributed -n prod +``` + +This will deploy a default version of Grafana Tempo to the `prod` namespace. Tempo is a distributed tracing backend that is used to store and query traces. Normally Tempo would sit alongside Loki and Grafana in the `meta` namespace, but for the purpose of this tutorial, we will pretend this is the primary application generating logs. + +Once deployed, let's expose Grafana once more: + +```bash +export POD_NAME=$(kubectl get pods --namespace meta -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}") && \ +kubectl --namespace meta port-forward $POD_NAME 3000 --address 0.0.0.0 +``` + +and navigate to [http://localhost:3000/a/grafana-lokiexplore-app](http://localhost:3000/a/grafana-lokiexplore-app) to view Grafana Tempo logs. + +{{< figure max-width="100%" src="/media/docs/loki/k8s-logs-tempo.png" caption="Label view of Tempo logs" alt="Label view of Tempo logs" >}} + + + + + +## Conclusion + +In this tutorial, you learned how to deploy Loki, Grafana, and the Kubernetes Monitoring Helm chart to collect and store logs from a Kubernetes cluster. We have deployed a minimal test version of each of these Helm charts to demonstrate how quickly you can get started with Loki. 
It is now worth exploring each of these Helm charts in more detail to understand how to scale them to meet your production needs: + +* [Loki Helm chart](https://grafana.com/docs/loki/latest/setup/install/helm/) +* [Grafana Helm chart](https://grafana.com/docs/grafana/latest/installation/helm/) +* [Kubernetes Monitoring Helm chart](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/) + + + + + diff --git a/loki/k8s-monitoring/step1.md b/loki/k8s-monitoring-helm/step1.md similarity index 82% rename from loki/k8s-monitoring/step1.md rename to loki/k8s-monitoring-helm/step1.md index d4d0ec1..0e08e29 100644 --- a/loki/k8s-monitoring/step1.md +++ b/loki/k8s-monitoring-helm/step1.md @@ -1,4 +1,4 @@ -# Step 1: Create the `meta`{{copy}} and `prod`{{copy}} namespaces +# Create the `meta`{{copy}} and `prod`{{copy}} namespaces The K8s Monitoring Helm chart will monitor two namespaces: `meta`{{copy}} and `prod`{{copy}}: @@ -6,7 +6,7 @@ The K8s Monitoring Helm chart will monitor two namespaces: `meta`{{copy}} and `p - `prod`{{copy}} namespace: This namespace will be used to deploy the sample application that will generate logs. -Create the `meta`{{copy}} and `prod`{{copy}} namespaces by running the following commands: +Create the `meta`{{copy}} and `prod`{{copy}} namespaces by running the following command: ```bash kubectl create namespace meta && kubectl create namespace prod diff --git a/loki/k8s-monitoring-helm/step2.md b/loki/k8s-monitoring-helm/step2.md new file mode 100644 index 0000000..3e93a8e --- /dev/null +++ b/loki/k8s-monitoring-helm/step2.md @@ -0,0 +1,25 @@ +# Add the Grafana Helm repository + +All three Helm charts (Loki, Grafana, and the Kubernetes Monitoring Helm) are available in the Grafana Helm repository. 
Add the Grafana Helm repository by running the following command: + +```bash +helm repo add grafana https://grafana.github.io/helm-charts && helm repo update +```{{exec}} + +As well as adding the repository to your local Helm list, we also run `helm repo update`{{copy}} to ensure you have the latest version of the charts. + +# Clone the tutorial repository + +Clone the tutorial repository by running the following command: + +```bash +git clone https://github.com/grafana/alloy-scenarios.git +```{{exec}} + +Then change directories to the `alloy-scenarios/k8s-logs`{{copy}} directory: + +```bash +cd alloy-scenarios/k8s-logs +```{{exec}} + +**The rest of this tutorial assumes you are in this directory.** diff --git a/loki/k8s-monitoring/step3.md b/loki/k8s-monitoring-helm/step3.md similarity index 60% rename from loki/k8s-monitoring/step3.md rename to loki/k8s-monitoring-helm/step3.md index 0aae3c1..69db630 100644 --- a/loki/k8s-monitoring/step3.md +++ b/loki/k8s-monitoring-helm/step3.md @@ -1,8 +1,8 @@ -# Step 4: Deploy Loki +# Deploy Loki -Grafana Loki will be used to store our collected logs. In this tutorial we will deploy Loki with a minimal footprint and use the default storage backend provided by the Loki Helm (MinIO). +Grafana Loki will be used to store our collected logs. In this tutorial we will deploy Loki with a minimal footprint and use the default storage backend provided by the Loki Helm chart, MinIO. -> **Note**: Due to the resource constraints of the Kubernetes cluster running in the playground, we are deploying Loki using a custom values file. This values file reduces the resource requirements of Loki. This turns off features such as; cache, Loki Canary, and runs Loki with limited resources. This can take up to **1 minute** to complete. +> **Note**: Due to the resource constraints of the Kubernetes cluster running in the playground, we are deploying Loki using a custom values file. This values file reduces the resource requirements of Loki.
This turns off features such as cache and Loki Canary, and runs Loki with limited resources. This can take up to **1 minute** to complete. To deploy Loki run the following command: @@ -10,21 +10,21 @@ To deploy Loki run the following command: helm install --values killercoda/loki-values.yml loki grafana/loki -n meta ```{{exec}} -This command will deploy Loki in the `meta`{{copy}} namespace. The command also includes a `values`{{copy}} file that specifies the configuration for Loki. For more details on how to configure the Loki Helm refer to the Loki Helm [documentation](https://grafana.com/docs/loki/latest/setup/install/helm). +This command will deploy Loki in the `meta`{{copy}} namespace. The command also includes a `values`{{copy}} file that specifies the configuration for Loki. For more details on how to configure the Loki Helm chart, refer to the Loki Helm [documentation](https://grafana.com/docs/loki/latest/setup/install/helm). -# Step 5: Deploy Grafana +# Deploy Grafana -Next we will deploy Grafana to the meta namespace. Grafana will be used to visualize the logs stored in Loki. To deploy Grafana run the following command: +Next we will deploy Grafana to the `meta`{{copy}} namespace. You will use Grafana to visualize the logs stored in Loki. To deploy Grafana run the following command: ```bash helm install --values grafana-values.yml grafana grafana/grafana --namespace meta ```{{exec}} -As before the command also includes a `values`{{copy}} file that specifies the configuration for Grafana. There are two important configurations attributes to take note of: +As before, the command also includes a `values`{{copy}} file that specifies the configuration for Grafana. There are two important configuration attributes to take note of: -1. `adminUser`{{copy}} & `adminPassword`{{copy}}: These are the credentials you will use to log in to Grafana. The values are `admin`{{copy}} and `adminadminadmin`{{copy}} respectively.
The recommended practice is to either use a Kubernetes secret or allow Grafana to generate a password for you. For more details on how to configure the Grafana Helm refer to the Grafana Helm [documentation](https://grafana.com/docs/grafana/latest/installation/helm/). +1. `adminUser`{{copy}} & `adminPassword`{{copy}}: These are the credentials you will use to log in to Grafana. The values are `admin`{{copy}} and `adminadminadmin`{{copy}} respectively. The recommended practice is to either use a Kubernetes secret or allow Grafana to generate a password for you. For more details on how to configure the Grafana Helm chart, refer to the Grafana Helm [documentation](https://grafana.com/docs/grafana/latest/installation/helm/). -1. `datasources`{{copy}}: This section of the configuration allows for the definition of data sources that Grafana will use. In this tutorial, we will define a data source for Loki. The data source is defined as follows: +1. `datasources`{{copy}}: This section of the configuration lets you define the data sources that Grafana should use. In this tutorial, you will define a Loki data source. The data source is defined as follows: ```yaml datasources: @@ -42,4 +42,4 @@ As before the command also includes a `values`{{copy}} file that specifies the c editable: false ```{{copy}} -This configuration defines a data source named `Loki`{{copy}} that Grafana will use to query logs stored in Loki. The `url`{{copy}} attribute specifies the URL of the Loki gateway. The Loki gateway is a service that sits in front of the Loki API and provides a single endpoint for ingesting and querying logs. The URL is in the format `http://loki-gateway.meta.svc.cluster.local:80`{{copy}}. The `loki-gateway`{{copy}} service is created by the Loki Helm chart and is used to query logs stored in Loki. 
**If you choose to deploy Loki in a different namespace or with a different name, you will need to update the `url`{{copy}} attribute accordingly.** +This configuration defines a data source named `Loki`{{copy}} that Grafana will use to query logs stored in Loki. The `url`{{copy}} attribute specifies the URL of the Loki gateway. The Loki gateway is a service that sits in front of the Loki API and provides a single endpoint for ingesting and querying logs. The URL is in the format `http://loki-gateway.meta.svc.cluster.local:80`{{copy}}. The `loki-gateway`{{copy}} service is created by the Loki Helm chart and is used to query logs stored in Loki. **If you choose to deploy Loki in a different namespace or with a different name, you will need to update the `url`{{copy}} attribute accordingly.** diff --git a/loki/k8s-monitoring/step4.md b/loki/k8s-monitoring-helm/step4.md similarity index 66% rename from loki/k8s-monitoring/step4.md rename to loki/k8s-monitoring-helm/step4.md index b133e94..ffd0845 100644 --- a/loki/k8s-monitoring/step4.md +++ b/loki/k8s-monitoring-helm/step4.md @@ -1,6 +1,6 @@ -# Step 6: Deploy the Kubernetes Monitoring Helm +# Deploy the Kubernetes Monitoring Helm chart -The Kubernetes Monitoring Helm chart is used for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack. This includes the ability to collect; metrics, logs, traces & continuous profiling data. The scope of this tutorial is to deploy the Kubernetes Monitoring Helm chart to collect pod logs and Kubernetes events. +The Kubernetes Monitoring Helm chart is used for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana stack. This includes the ability to collect metrics, logs, traces, and continuous profiling data. The scope of this tutorial is to deploy the Kubernetes Monitoring Helm chart to collect pod logs and Kubernetes events.
To deploy the Kubernetes Monitoring Helm chart run the following command: @@ -35,6 +35,9 @@ podLogs: enabled: true gatherMethod: kubernetesApi collector: alloy-logs + labelsToKeep: ["app_kubernetes_io_name","container","instance","job","level","namespace","service_name","service_namespace","deployment_environment","deployment_environment_name"] + structuredMetadata: + pod: pod # Set structured metadata "pod" from label "pod" namespaces: - meta - prod @@ -60,11 +63,15 @@ To break down the configuration file: - Define the cluster name as `meta-monitoring-tutorial`{{copy}}. This a static label that will be attached to all logs collected by the Kubernetes Monitoring Helm chart. -- Define a destination named `loki`{{copy}} that will be used to forward logs to Loki. The `url`{{copy}} attribute specifies the URL of the Loki gateway. **If you choose to deploy Loki in a different namespace or in a different location entirley, you will need to update the `url`{{copy}} attribute accordingly.** +- Define a destination named `loki`{{copy}} that will be used to forward logs to Loki. The `url`{{copy}} attribute specifies the URL of the Loki gateway. **If you choose to deploy Loki in a different namespace or in a different location entirely, you will need to update the `url`{{copy}} attribute accordingly.** - Enable the collection of cluster events and pod logs: - `collector`{{copy}}: specifies which collector to use to collect logs. In this case, we are using the `alloy-logs`{{copy}} collector. + - `labelsToKeep`{{copy}}: specifies the labels to keep when collecting logs. Note this does not drop logs. This is useful when you do not want to apply a high cardinality label. In this case, we have removed `pod`{{copy}} from the labels to keep. + + - `structuredMetadata`{{copy}}: specifies the structured metadata to collect. In this case, we are setting the structured metadata `pod`{{copy}} so we can retain the pod name for querying.
It does not, however, need to be indexed as a label. + - `namespaces`{{copy}}: specifies the namespaces to collect logs from. In this case, we are collecting logs from the `meta`{{copy}} and `prod`{{copy}} namespaces. - Disable the collection of node logs for the purpose of this tutorial as it requires the mounting of `/var/log/journal`{{copy}}. This is out of scope for this tutorial. diff --git a/loki/k8s-monitoring/step5.md b/loki/k8s-monitoring-helm/step5.md similarity index 76% rename from loki/k8s-monitoring/step5.md rename to loki/k8s-monitoring-helm/step5.md index a1223a1..a9a4053 100644 --- a/loki/k8s-monitoring/step5.md +++ b/loki/k8s-monitoring-helm/step5.md @@ -1,4 +1,4 @@ -# Step 7: Accessing Grafana +# Accessing Grafana To access Grafana, you will need to port-forward the Grafana service to your local machine. To do this, run the following command: @@ -8,15 +8,18 @@ kubectl --namespace meta port-forward $POD_NAME 3000 --address 0.0.0.0 ```{{exec}} > **Tip:** -> This will make your terminal unusable until you stop the port-forwarding process. To do this, press `Ctrl + C`{{copy}}. +> This will make your terminal unusable until you stop the port-forwarding process. To stop the process, press `Ctrl + C`{{copy}}. -This command will port-forward the Grafana service to your local machine on port `3000`{{copy}}. You can access Grafana by navigating to [http://localhost:3000]({{TRAFFIC_HOST1_3000}}) in your browser. The default credentials are `admin`{{copy}} and `adminadminadmin`{{copy}}. One of the first places you should visit is Explore Logs which will provide a no-code view of the logs being stored in Loki: +This command will port-forward the Grafana service to your local machine on port `3000`{{copy}}. +You can now access Grafana by navigating to [http://localhost:3000]({{TRAFFIC_HOST1_3000}}) in your browser. The default credentials are `admin`{{copy}} and `adminadminadmin`{{copy}}.
+ +One of the first places you should visit is Explore Logs, which lets you automatically visualize and explore your logs without having to write queries: [http://localhost:3000/a/grafana-lokiexplore-app]({{TRAFFIC_HOST1_3000}}/a/grafana-lokiexplore-app) ![Explore Logs view of K8s logs](https://grafana.com/media/docs/loki/k8s-logs-explore-logs.png) -# Step 8 (Optional): View the Alloy UI +# Optional: View the Alloy UI The Kubernetes Monitoring Helm chart deploys Grafana Alloy to collect and forward telemetry data from the Kubernetes cluster. The Helm is designed to abstract you from creating an Alloy configuration file. However if you would like to understand the pipeline you can view the Alloy UI. To access the Alloy UI, you will need to port-forward the Alloy service to your local machine. To do this, run the following command: @@ -26,7 +29,7 @@ kubectl --namespace meta port-forward $POD_NAME 12345 --address 0.0.0.0 ```{{exec}} > **Tip:** -> This will make your terminal unusable until you stop the port-forwarding process. To do this, press `Ctrl + C`{{copy}}. +> This will make your terminal unusable until you stop the port-forwarding process. To stop the process, press `Ctrl + C`{{copy}}. This command will port-forward the Alloy service to your local machine on port `12345`{{copy}}. You can access the Alloy UI by navigating to [http://localhost:12345]({{TRAFFIC_HOST1_12345}}) in your browser. diff --git a/loki/k8s-monitoring/step6.md b/loki/k8s-monitoring-helm/step6.md similarity index 81% rename from loki/k8s-monitoring/step6.md rename to loki/k8s-monitoring-helm/step6.md index 5590a13..b3e973d 100644 --- a/loki/k8s-monitoring/step6.md +++ b/loki/k8s-monitoring-helm/step6.md @@ -1,6 +1,6 @@ -# Step 9: Adding a sample application to `prod`{{copy}} +# Adding a sample application to `prod`{{copy}} -Lastly, lets deploy a sample application to the `prod`{{copy}} namespace that will generate some logs.
To deploy the sample application run the following command: +Finally, let's deploy a sample application to the `prod`{{copy}} namespace that will generate some logs. To deploy the sample application run the following command: ```bash helm install tempo grafana/tempo-distributed -n prod diff --git a/loki/k8s-monitoring/finish.md b/loki/k8s-monitoring/finish.md deleted file mode 100644 index a5d7b7a..0000000 --- a/loki/k8s-monitoring/finish.md +++ /dev/null @@ -1,9 +0,0 @@ -# Conclusion - -In this tutorial, you learned how to deploy Loki, Grafana, and the Kubernetes Monitoring Helm chart to collect and store logs from a Kubernetes cluster. We have deployed a minimal test version of each of these helm charts to demonstrate how quickly you can get started with Loki. It now worth exploring each of these helm charts in more detail to understand how to scale them to meet your production needs: - -- [Loki Helm](https://grafana.com/docs/loki/latest/setup/install/helm/) - -- [Grafana Helm](https://grafana.com/docs/grafana/latest/installation/helm/) - -- [Kubernetes Monitoring Helm](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/) diff --git a/loki/k8s-monitoring/step2.md b/loki/k8s-monitoring/step2.md deleted file mode 100644 index d6a7bae..0000000 --- a/loki/k8s-monitoring/step2.md +++ /dev/null @@ -1,19 +0,0 @@ -# Step 2: Add the Grafana Helm repository - -All three helm charts (Loki, Grafana, and the Kubernetes Monitoring Helm) are available in the Grafana Helm repository. Add the Grafana Helm repository by running the following command: - -```bash -helm repo add grafana https://grafana.github.io/helm-charts && helm repo update -```{{exec}} - -It’s recommended to also run `helm repo update`{{copy}} to ensure you have the latest version of the charts.
- -# Step 3: Clone the tutorial repository - -Clone the tutorial repository by running the following command: - -```bash -git clone https://github.com/grafana/alloy-scenarios.git && cd alloy-scenarios/k8s-logs -```{{exec}} - -As well as cloning the repository, we have also changed directories to `alloy-scenarios/k8s-logs`{{copy}}. **The rest of this tutorial assumes you are in this directory.** diff --git a/loki/structure.json b/loki/structure.json index 2f1b186..3afe935 100644 --- a/loki/structure.json +++ b/loki/structure.json @@ -12,6 +12,6 @@ { "path": "otel-collector-getting-started", "title": "Getting started with the OpenTelemetry Collector and Loki tutorial"}, { "path": "fluentbit-loki-tutorial", "title": "Sending logs to Loki using Fluent Bit tutorial"}, { "path": "logcli-tutorial", "title": "LogCLI tutorial"}, - { "path": "k8s-monitoring", "title": "Kubernetes Monitoring with Loki"} + { "path": "k8s-monitoring-helm", "title": "Kubernetes Monitoring Helm"} ] } \ No newline at end of file diff --git a/tempo/quick-start/preprocessed.md b/tempo/quick-start/preprocessed.md index fe85994..99fa37a 100755 --- a/tempo/quick-start/preprocessed.md +++ b/tempo/quick-start/preprocessed.md @@ -66,6 +66,11 @@ To learn more, read the [local storage example README](https://github.com/grafan cd tempo/example/docker-compose/local ``` +1. Create a new directory to store data: + ```bash + mkdir tempo-data + ``` + 1. Start the services defined in the docker-compose file: ```bash docker compose up -d @@ -74,8 +79,8 @@ To learn more, read the [local storage example README](https://github.com/grafan 1. Verify that the services are running: ```bash docker compose ps - ``` - + ``` + You should see something like: ```console docker compose ps @@ -92,15 +97,15 @@ To learn more, read the [local storage example README](https://github.com/grafan ## Explore the traces in Grafana -As part of the Docker Compose manifest, Grafana is now accessible on port 3000. 
+As part of the Docker Compose manifest, Grafana is now accessible on port 3000. You can use Grafana to explore the traces generated by the k6-tracing service. 1. Open a browser and navigate to [http://localhost:3000](http://localhost:3000). 1. Once logged in, navigate to the **Explore** page, select the **Tempo** data source and select the **Search** tab. Select **Run query** to list the recent traces stored in Tempo. Select one to view the trace diagram: - + {{< figure align="center" src="/media/docs/grafana/data-sources/tempo/query-editor/tempo-ds-builder-span-details-v11.png" alt="Use the query builder to explore tracing data in Grafana" >}} - + 1. A couple of minutes after Tempo starts, select the **Service graph** tab for the Tempo data source in the **Explore** page. Select **Run query** to view a service graph, generated by Tempo’s metrics-generator. @@ -110,14 +115,14 @@ You can use Grafana to explore the traces generated by the k6-tracing service. ```bash docker compose down -v ``` - + ## Explore Traces plugin - The [Explore Traces](https://grafana.com/docs/grafana/latest/explore/simplified-exploration/traces/) plugin offers an opinionated non query-based approach to exploring traces. Lets take a look at some of its key features and panels. + The [Explore Traces](https://grafana.com/docs/grafana/latest/explore/simplified-exploration/traces/) plugin offers an opinionated non-query-based approach to exploring traces. Let's take a look at some of its key features and panels. 1. Open a browser and navigate to [http://localhost:3000/a/grafana-exploretraces-app](http://localhost:3000/a/grafana-exploretraces-app). 2. Within the filter bar, there is a dropdown menu set to **Rate** of **Full traces**. Change this to **Duration** and **All spans**. @@ -130,11 +135,11 @@ Breakdown of the view: * The histogram at the top shows the distribution of span durations. The lighter the color, the more spans in that duration bucket.
In this example, most spans fall within `537ms`, which is considered the average duration for the system. * The high peaks in the histogram indicate spans that are taking longer than the average (As high as `2.15s`). These are likely to be the spans that are causing performance issues. You can investigate further to identify the root cause. -Select `Slow traces` tab in the navigation bar to view the slowest traces in the system. +Select the `Slow traces` tab in the navigation bar to view the slowest traces in the system. {{< figure align="center" src="/media/docs/tempo/slow-trace-view.png" alt="Slow traces panel" >}} -`shop-backend` appears to be the primary culprit for the slow traces. This happens when a user initiates the `article-to-cart` operation. From here, you can select the **Trace Name** to open the **Trace View** panel. +`shop-backend` appears to be the primary culprit for the slow traces. This happens when a user initiates the `article-to-cart` operation. From here, you can select the **Trace Name** to open the **Trace View** panel. {{< figure align="center" src="/media/docs/tempo/slow-trace-trace-view.png" alt="Trace View panel" >}} @@ -143,7 +148,7 @@ The **Trace View** panel provides a detailed view of the trace. The panel is div * The middle section shows the trace timeline. Each span is represented as a horizontal bar. The color of the bar represents the span's status. The width of the bar represents the duration of the span. * The bottom section shows the details of the selected span. This includes the span name, duration, and tags. -Drilling into the `shop-backend` span, you can see that the `place-articles` operation has an exception event tied to it. This is likely the root cause of the slow trace. +Drilling into the `shop-backend` span, you can see that the `place-articles` operation has an exception event tied to it. This is likely the root cause of the slow trace.
{{< figure align="center" src="/media/docs/tempo/slow-trace-root-cause-2.png" alt="Span View panel" >}} diff --git a/tempo/quick-start/step1.md b/tempo/quick-start/step1.md index b194bae..abe10c8 100644 --- a/tempo/quick-start/step1.md +++ b/tempo/quick-start/step1.md @@ -15,6 +15,12 @@ To learn more, read the [local storage example README](https://github.com/grafan cd tempo/example/docker-compose/local ```{{exec}} +1. Create a new directory to store data: + + ```bash + mkdir tempo-data + ```{{exec}} + 1. Start the services defined in the docker-compose file: ```bash diff --git a/workshops/structure.json b/workshops/structure.json index 889a20f..523dccc 100644 --- a/workshops/structure.json +++ b/workshops/structure.json @@ -2,4 +2,4 @@ "items": [ { "path": "adventure", "title": "Quest World a text based adventure"} ] -} \ No newline at end of file +}