Edit to update Alloy content (#103)
* Fix some links and remove tables comparing versions

* Clean up refs to flow and regenerate versioned files

* Fixing links spelling and typos

* Fix cross ref links

* Fix broken xref links

* Fix more broken xrefs and minor style fixes

* Fix paths for Windows

* Fix broken links in migrate topics
clayton-cornell authored Apr 2, 2024
1 parent 55184f1 commit bdf2636
Showing 22 changed files with 109 additions and 118 deletions.
16 changes: 8 additions & 8 deletions docs/sources/_index.md
@@ -15,7 +15,7 @@ cascade:
{{< param "FULL_PRODUCT_NAME" >}} is a vendor-neutral distribution of the [OpenTelemetry][] (OTel) Collector.
{{< param "PRODUCT_NAME" >}} uniquely combines the very best OSS observability signals in the community.
It offers native pipelines for OTel, [Prometheus][], [Pyroscope][], [Loki][], and many other metrics, logs, traces, and profile tools.
- In additon, you can also use {{< param "PRODUCT_NAME" >}} pipelines to do other tasks such as configure alert rules in Loki and Mimir.
+ In additon, you can use {{< param "PRODUCT_NAME" >}} pipelines to do other tasks such as configure alert rules in Loki and Mimir.
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and Promtail.
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combined into a hybrid system of multiple collectors and agents.
You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and you can pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
@@ -27,7 +27,7 @@ You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructu
Some of these features include:

* **Custom components:** You can use {{< param "PRODUCT_NAME" >}} to create and share custom components.
- Custom components combine a pipeline of existing components into a single, easy-to-understand component that is just a few lines long.
+ Custom components combine a pipeline of existing components into a single, easy-to-understand component that's just a few lines long.
You can use pre-built custom components from the community, ones packaged by Grafana, or create your own.
* **GitOps compatibility:** {{< param "PRODUCT_NAME" >}} uses frameworks to pull configurations from Git, S3, HTTP endpoints, and just about any other source.
* **Clustering support:** {{< param "PRODUCT_NAME" >}} has native clustering support.
@@ -37,10 +37,10 @@ Some of these features include:
* **Debugging utilities:** {{< param "PRODUCT_NAME" >}} provides troubleshooting support and an embedded [user interface][UI] to help you identify and resolve configuration problems.

[OpenTelemetry]: https://opentelemetry.io/ecosystem/distributions/
- [Prometheus]: https://prometheus.io
- [Loki]: https://github.com/grafana/loki
- [Grafana]: https://github.com/grafana/grafana
- [Tempo]: https://github.com/grafana/tempo
- [Mimir]: https://github.com/grafana/mimir
- [Pyroscope]: https://github.com/grafana/pyroscope
+ [Prometheus]: https://prometheus.io/
+ [Loki]: https://grafana.com/docs/loki/
+ [Grafana]: https://grafana.com/docs/grafana/
+ [Tempo]: https://grafana.com/docs/tempo/
+ [Mimir]: https://grafana.com/docs/mimir/
+ [Pyroscope]: https://grafana.com/docs/pyroscope/
[UI]: ./tasks/debug/#alloy-ui
16 changes: 8 additions & 8 deletions docs/sources/_index.md.t
@@ -15,7 +15,7 @@ cascade:
{{< param "FULL_PRODUCT_NAME" >}} is a vendor-neutral distribution of the [OpenTelemetry][] (OTel) Collector.
{{< param "PRODUCT_NAME" >}} uniquely combines the very best OSS observability signals in the community.
It offers native pipelines for OTel, [Prometheus][], [Pyroscope][], [Loki][], and many other metrics, logs, traces, and profile tools.
- In additon, you can also use {{< param "PRODUCT_NAME" >}} pipelines to do other tasks such as configure alert rules in Loki and Mimir.
+ In additon, you can use {{< param "PRODUCT_NAME" >}} pipelines to do other tasks such as configure alert rules in Loki and Mimir.
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and Promtail.
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combined into a hybrid system of multiple collectors and agents.
You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and you can pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
@@ -27,7 +27,7 @@ You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructu
Some of these features include:

* **Custom components:** You can use {{< param "PRODUCT_NAME" >}} to create and share custom components.
- Custom components combine a pipeline of existing components into a single, easy-to-understand component that is just a few lines long.
+ Custom components combine a pipeline of existing components into a single, easy-to-understand component that's just a few lines long.
You can use pre-built custom components from the community, ones packaged by Grafana, or create your own.
* **GitOps compatibility:** {{< param "PRODUCT_NAME" >}} uses frameworks to pull configurations from Git, S3, HTTP endpoints, and just about any other source.
* **Clustering support:** {{< param "PRODUCT_NAME" >}} has native clustering support.
@@ -37,10 +37,10 @@ Some of these features include:
* **Debugging utilities:** {{< param "PRODUCT_NAME" >}} provides troubleshooting support and an embedded [user interface][UI] to help you identify and resolve configuration problems.
[OpenTelemetry]: https://opentelemetry.io/ecosystem/distributions/
- [Prometheus]: https://prometheus.io
- [Loki]: https://github.com/grafana/loki
- [Grafana]: https://github.com/grafana/grafana
- [Tempo]: https://github.com/grafana/tempo
- [Mimir]: https://github.com/grafana/mimir
- [Pyroscope]: https://github.com/grafana/pyroscope
+ [Prometheus]: https://prometheus.io/
+ [Loki]: https://grafana.com/docs/loki/
+ [Grafana]: https://grafana.com/docs/grafana/
+ [Tempo]: https://grafana.com/docs/tempo/
+ [Mimir]: https://grafana.com/docs/mimir/
+ [Pyroscope]: https://grafana.com/docs/pyroscope/
[UI]: ./tasks/debug/#alloy-ui
8 changes: 4 additions & 4 deletions docs/sources/concepts/clustering.md
@@ -8,11 +8,11 @@ weight: 500

# Clustering

- Clustering enables a fleet of {{< param "PRODUCT_NAME" >}}s to work together for workload distribution and high availability.
+ Clustering enables a fleet of {{< param "PRODUCT_NAME" >}} deployments to work together for workload distribution and high availability.
It helps create horizontally scalable deployments with minimal resource and operational overhead.

To achieve this, {{< param "PRODUCT_NAME" >}} makes use of an eventually consistent model that assumes all participating
- {{< param "PRODUCT_NAME" >}}s are interchangeable and converge on using the same configuration file.
+ {{< param "PRODUCT_NAME" >}} deployments are interchangeable and converge on using the same configuration file.

The behavior of a standalone, non-clustered {{< param "PRODUCT_NAME" >}} is the same as if it were a single-node cluster.

@@ -24,7 +24,7 @@ You configure clustering by passing `cluster` command-line flags to the [run][]

Target auto-distribution is the most basic use case of clustering.
It allows scraping components running on all peers to distribute the scrape load between themselves.
- Target auto-distribution requires that all {{< param "PRODUCT_NAME" >}} in the same cluster can reach the same service discovery APIs and scrape the same targets.
+ Target auto-distribution requires that all {{< param "PRODUCT_NAME" >}} deployments in the same cluster can reach the same service discovery APIs and scrape the same targets.

You must explicitly enable target auto-distribution on components by defining a `clustering` block.
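A minimal sketch of enabling that `clustering` block on a scrape component might look like the following; the job name, targets, and `forward_to` destination are placeholders, not taken from this commit:

```alloy
prometheus.scrape "default" {
  // Opt this component into clustered target distribution.
  clustering {
    enabled = true
  }

  // Placeholder wiring: substitute your own discovery and remote_write components.
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
```

With `enabled = true`, each cluster peer scrapes only the share of targets it owns.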

@@ -41,7 +41,7 @@ prometheus.scrape "default" {
A cluster state change is detected when a new node joins or an existing node leaves.
All participating components locally recalculate target ownership and re-balance the number of targets they’re scraping without explicitly communicating ownership over the network.

- Target auto-distribution allows you to dynamically scale the number of {{< param "PRODUCT_NAME" >}}s to distribute workload during peaks.
+ Target auto-distribution allows you to dynamically scale the number of {{< param "PRODUCT_NAME" >}} deployments to distribute workload during peaks.
It also provides resiliency because targets are automatically picked up by one of the node peers if a node leaves.

{{< param "PRODUCT_NAME" >}} uses a local consistent hashing algorithm to distribute targets, meaning that, on average, only ~1/N of the targets are redistributed.
4 changes: 2 additions & 2 deletions docs/sources/concepts/components.md
@@ -12,8 +12,8 @@ Each component handles a single task, such as retrieving secrets or collecting P

Components are composed of the following:

- * Arguments: Settings that configure a component.
- * Exports: Named values that a component exposes to other components.
+ * **Arguments:** Settings that configure a component.
+ * **Exports:** Named values that a component exposes to other components.

Each component has a name that describes what that component is responsible for.
For example, the `local.file` component is responsible for retrieving the contents of files on disk.
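As a hypothetical illustration of arguments and exports (the paths and URLs below are placeholders, not part of this commit): `local.file` takes a `filename` argument and exposes a `content` export that other components can reference:

```alloy
// Argument: the path of the file to read (placeholder path).
local.file "api_key" {
  filename = "/var/lib/secrets/api-key"
}

// Another component consumes the export local.file.api_key.content.
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"

    basic_auth {
      username = "admin"
      password = local.file.api_key.content
    }
  }
}
```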
2 changes: 1 addition & 1 deletion docs/sources/concepts/configuration-syntax/files.md
@@ -7,7 +7,7 @@ weight: 100

# Files

- {{< param "PRODUCT_NAME" >}} configuration files are plain text files with the `.alloy` file extension.
+ {{< param "PRODUCT_NAME" >}} configuration files are plain text files with a `.alloy` file extension.
You can refer to each {{< param "PRODUCT_NAME" >}} file as a "configuration file" or an "{{< param "PRODUCT_NAME" >}} configuration."

{{< param "PRODUCT_NAME" >}} configuration files must be UTF-8 encoded and can contain Unicode characters.
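For example, a minimal, hypothetical configuration file might contain nothing more than a `logging` block:

```alloy
// example.alloy — hypothetical minimal configuration
logging {
  level  = "info"
  format = "logfmt"
}
```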
2 changes: 1 addition & 1 deletion docs/sources/get-started/deploy.md
@@ -39,7 +39,7 @@ To decide whether scaling is necessary, check metrics such as:
#### Stateful and stateless components

In the context of tracing, a "stateful component" is a component that needs to aggregate certain spans to work correctly.
- A "stateless {{< param "PRODUCT_NAME" >}}" is an {{< param "PRODUCT_NAME" >}} which doesn't contain stateful components.
+ A "stateless {{< param "PRODUCT_NAME" >}}" is an {{< param "PRODUCT_NAME" >}} instance which doesn't contain stateful components.

Scaling stateful {{< param "PRODUCT_NAME" >}} instances is more difficult, because spans must be forwarded to a specific {{< param "PRODUCT_NAME" >}} instance according to a span property such as trace ID or a `service.name` attribute.
You can forward spans with `otelcol.exporter.loadbalancing`.
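A rough sketch of that load-balancing exporter follows; the hostnames and TLS settings are placeholder assumptions, and the exact schema should be checked against the component reference:

```alloy
otelcol.exporter.loadbalancing "default" {
  // Route spans by trace ID so every span of a trace lands on the same backend.
  routing_key = "traceID"

  resolver {
    static {
      hostnames = ["alloy-1.example.com:4317", "alloy-2.example.com:4317"]
    }
  }

  protocol {
    otlp {
      client {
        tls {
          // Placeholder TLS settings; enable verification in production.
          insecure = true
        }
      }
    }
  }
}
```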
2 changes: 1 addition & 1 deletion docs/sources/get-started/install/_index.md
@@ -28,4 +28,4 @@ Installing {{< param "PRODUCT_NAME" >}} on other operating systems is possible,
By default, {{< param "PRODUCT_NAME" >}} sends anonymous usage information to Grafana Labs.
Refer to [data collection][] for more information about what data is collected and how you can opt-out.

- [data collection]: "../../../data-collection/
+ [data collection]: "../../../../data-collection/
2 changes: 1 addition & 1 deletion docs/sources/get-started/install/kubernetes.md
@@ -69,7 +69,7 @@ You have successfully deployed {{< param "PRODUCT_NAME" >}} on Kubernetes, using

- [Configure {{< param "PRODUCT_NAME" >}}][Configure]

- - Refer to the [{{< param "PRODUCT_NAME" >}} Helm chart documentation on Artifact Hub][Artifact Hub] for more information about the Helm chart.
+ <!-- - Refer to the [{{< param "PRODUCT_NAME" >}} Helm chart documentation on Artifact Hub][Artifact Hub] for more information about the Helm chart. -->

[Helm]: https://helm.sh
[Artifact Hub]: https://artifacthub.io/packages/helm/grafana/alloy
6 changes: 4 additions & 2 deletions docs/sources/introduction/_index.md
@@ -18,7 +18,7 @@ It's fully compatible with the most popular open source observability standards
Some of the key features of {{< param "PRODUCT_NAME" >}} include:

* **Custom components:** You can use {{< param "PRODUCT_NAME" >}} to create and share custom components.
- Custom components combine a pipeline of existing components into a single, easy-to-understand component that is just a few lines long.
+ Custom components combine a pipeline of existing components into a single, easy-to-understand component that's just a few lines long.
You can use pre-built custom components from the community, ones packaged by Grafana, or create your own.
* **Reusable components:** You can use the output of a component as the input for multiple other components.
* **Chained components:** You can chain components together to form a pipeline.
@@ -30,6 +30,7 @@ Some of the key features of {{< param "PRODUCT_NAME" >}} include:
* **Security:** {{< param "PRODUCT_NAME" >}} helps you manage authentication credentials and connect to HashiCorp Vaults or Kubernetes clusters to retrieve secrets.
* **Debugging utilities:** {{< param "PRODUCT_NAME" >}} provides troubleshooting support and an embedded [user interface][UI] to help you identify and resolve configuration problems.

<!--
### Compare {{% param "PRODUCT_NAME" %}} with OpenTelemetry and Prometheus
The following tables compare some of the features of {{< param "PRODUCT_NAME" >}} with OpenTelemetry and Prometheus.
@@ -60,7 +61,7 @@ The following tables compare some of the features of {{< param "PRODUCT_NAME" >}
| **Cloud integrations** | Some | No | No |
| **Kubernetes monitoring** | [Yes][helm chart] | No | Yes, custom |
| **Application observability** | [Yes][observability] | Yes | No |

<!--
<!--
### BoringCrypto
@@ -78,6 +79,7 @@ binaries and images with BoringCrypto enabled. Builds and Docker images for Linu
* Consult the [Tasks][] instructions to accomplish common objectives with {{< param "PRODUCT_NAME" >}}.
* Check out the [Reference][] documentation to find specific information you might be looking for.

[OpenTelemetry]: https://opentelemetry.io/ecosystem/distributions/
[Install]: ../get-started/install/
[Concepts]: ../concepts/
[Tasks]: ../tasks/
2 changes: 1 addition & 1 deletion docs/sources/reference/_index.md
@@ -6,7 +6,7 @@ title: Grafana Alloy Reference
weight: 600
---

- # {{% param "PRODUCT_NAME" %}} Reference
+ # {{% param "FULL_PRODUCT_NAME" %}} Reference

This section provides reference-level documentation for the various parts of {{< param "PRODUCT_NAME" >}}:

12 changes: 6 additions & 6 deletions docs/sources/shared/deploy-alloy.md
@@ -10,7 +10,7 @@ title: Deploy Grafana Alloy
{{< param "PRODUCT_NAME" >}} is a flexible, vendor-neutral telemetry collector.
This flexibility means that {{< param "PRODUCT_NAME" >}} doesn’t enforce a specific deployment topology but can work in multiple scenarios.

- This page lists common topologies used for deployments of {{% param "PRODUCT_NAME" %}}, when to consider using each topology, issues you may run into, and scaling considerations.
+ This page lists common topologies used for {{% param "PRODUCT_NAME" %}} deployments, when to consider using each topology, issues you may run into, and scaling considerations.

## As a centralized collection service

@@ -20,7 +20,7 @@ This topology allows you to use a smaller number of collectors to coordinate ser
![centralized-collection](/media/docs/agent/agent-topologies/centralized-collection.png)

Using this topology requires deploying {{< param "PRODUCT_NAME" >}} on separate infrastructure, and making sure that they can discover and reach these applications over the network.
- The main predictor for the size of {{< param "PRODUCT_NAME" >}} is the number of active metrics series it's scraping. A rule of thumb is approximately 10 KB of memory for each series.
+ The main predictor for the size of an {{< param "PRODUCT_NAME" >}} deployment is the number of active metrics series it's scraping. A rule of thumb is approximately 10 KB of memory for each series.
We recommend you start looking towards horizontal scaling around the 1 million active series mark.

### Using Kubernetes StatefulSets
@@ -50,11 +50,11 @@ You can also use a Kubernetes Deployment in cases where persistent storage isn't

## As a host daemon

- Deploying one {{< param "PRODUCT_NAME" >}} per machine is required for collecting machine-level metrics and logs, such as node_exporter hardware and network metrics or journald system logs.
+ Deploying one {{< param "PRODUCT_NAME" >}} instance per machine is required for collecting machine-level metrics and logs, such as node_exporter hardware and network metrics or journald system logs.

![daemonset](/media/docs/agent/agent-topologies/daemonset.png)

- Each {{< param "PRODUCT_NAME" >}} requires you to open an outgoing connection for each remote endpoint it’s shipping data to.
+ Each {{< param "PRODUCT_NAME" >}} instance requires you to open an outgoing connection for each remote endpoint it’s shipping data to.
This can lead to NAT port exhaustion on the egress infrastructure.
Each egress IP can support up to (65535 - 1024 = 64511) outgoing connections on different ports.
So, if all {{< param "PRODUCT_NAME" >}}s are shipping metrics and log data, an egress IP can support up to 32,255 collectors.
@@ -104,7 +104,7 @@ The Pod’s controller, network configuration, enabled capabilities, and availab

* Doesn’t scale separately
* Makes resource consumption harder to monitor and predict
- * {{< param "PRODUCT_NAME" >}}s don't have a life cycle of their own, making it harder to reason about things like recovering from network outages
+ * Each {{< param "PRODUCT_NAME" >}} instance doesn't have a life cycle of its own, making it harder to do things like recovering from network outages

### Use for

@@ -115,7 +115,7 @@ The Pod’s controller, network configuration, enabled capabilities, and availab
### Don’t use for

* Long-lived applications
- * Scenarios where the {{< param "PRODUCT_NAME" >}} size grows so large it can become a noisy neighbor
+ * Scenarios where the {{< param "PRODUCT_NAME" >}} deployment size grows so large it can become a noisy neighbor

<!-- ToDo: Check URL path -->
[hashmod sharding]: https://grafana.com/docs/agent/latest/static/operation-guide/
4 changes: 2 additions & 2 deletions docs/sources/tasks/configure/_index.md
@@ -8,7 +8,7 @@ weight: 90

# Configure {{% param "FULL_PRODUCT_NAME" %}}

- You can configure {{< param "PRODUCT_NAME" >}} after it is [installed][Install].
+ You can configure {{< param "PRODUCT_NAME" >}} after it is [installed][].
The default configuration file for {{< param "PRODUCT_NAME" >}} is located at:

* Linux: `/etc/alloy/config.alloy`
@@ -19,4 +19,4 @@ This section includes information that helps you configure {{< param "PRODUCT_NA

{{< section >}}

- [Install]: ../../get-started/install/
+ [installed]: ../../get-started/install/
8 changes: 4 additions & 4 deletions docs/sources/tasks/configure/configure-windows.md
@@ -10,7 +10,7 @@ weight: 500

To configure {{< param "PRODUCT_NAME" >}} on Windows, perform the following steps:

- 1. Edit the default configuration file at `C:\Program Files\Grafana Alloy\config.alloy`.
+ 1. Edit the default configuration file at `%PROGRAMFILES%\GrafanaLabs\Alloy\config.alloy`.

1. Restart the {{< param "PRODUCT_NAME" >}} service:

@@ -30,8 +30,8 @@ By default, the {{< param "PRODUCT_NAME" >}} service will launch and pass the
following arguments to the {{< param "PRODUCT_NAME" >}} binary:

* `run`
- * `C:\Program Files\Grafana Alloy\config.alloy`
- * `--storage.path=C:\ProgramData\Grafana Alloy\data`
+ * `%PROGRAMFILES%\GrafanaLabs\Alloy\config.alloy`
+ * `--storage.path=%PROGRAMDATA%\GrafanaLabs\Alloy\data`

To change the set of command-line arguments passed to the {{< param "PRODUCT_NAME" >}} binary, perform the following steps:

@@ -41,7 +41,7 @@ To change the set of command-line arguments passed to the {{< param "PRODUCT_NAM

1. Type `regedit` and click **OK**.

- 1. Navigate to the key at the path `HKEY_LOCAL_MACHINE\SOFTWARE\Grafana\Grafana Alloy`.
+ 1. Navigate to the key at the path `HKEY_LOCAL_MACHINE\SOFTWARE\GrafanaLabs\Alloy`.

1. Double-click on the value called **Arguments**.
