From e36c836da86d8d308d4485eb49d462dd60d59890 Mon Sep 17 00:00:00 2001 From: Colleen McGinnis Date: Fri, 1 Nov 2024 08:30:24 -0500 Subject: [PATCH] Fix MDX syntax, broken links, and use of the `observability` variable (#4462) * mdx syntax fixes * Apply suggestions from code review --- .../serverless/aiops/aiops-analyze-spikes.mdx | 2 +- .../aiops/aiops-detect-anomalies.mdx | 10 +-- .../aiops/aiops-detect-change-points.mdx | 4 +- docs/en/serverless/aiops/aiops.mdx | 2 +- .../create-elasticsearch-query-alert-rule.mdx | 10 +-- .../create-latency-threshold-alert-rule.mdx | 2 +- .../synthetic-monitor-status-alert.mdx | 64 ++++++++++++++----- ...telemetry-opentelemetry-native-support.mdx | 4 +- .../apm-agents/apm-agents-opentelemetry.mdx | 4 +- .../get-started-with-metrics.mdx | 2 +- .../infra-monitoring/host-metrics.mdx | 4 +- .../infra-monitoring/infra-monitoring.mdx | 2 +- docs/en/serverless/inventory.mdx | 24 +++---- .../logging/add-logs-service-name.mdx | 2 +- .../logging/correlate-application-logs.mdx | 2 +- docs/en/serverless/logging/log-monitoring.mdx | 2 +- .../logging/view-and-monitor-logs.mdx | 2 +- docs/en/serverless/observability-overview.mdx | 8 +-- .../create-an-observability-project.mdx | 6 +- .../quickstarts/k8s-logs-metrics.mdx | 3 +- .../monitor-hosts-with-elastic-agent.mdx | 8 ++- .../synthetics/synthetics-analyze.mdx | 2 +- .../synthetics-command-reference.mdx | 2 +- .../synthetics/synthetics-create-test.mdx | 2 +- .../technical-preview-limitations.mdx | 2 +- .../apm/guide/install-agents/net.mdx | 2 +- .../reference/lightweight-config/common.mdx | 2 +- 27 files changed, 109 insertions(+), 70 deletions(-) diff --git a/docs/en/serverless/aiops/aiops-analyze-spikes.mdx b/docs/en/serverless/aiops/aiops-analyze-spikes.mdx index 984e30d3fd..7cd105ec1d 100644 --- a/docs/en/serverless/aiops/aiops-analyze-spikes.mdx +++ b/docs/en/serverless/aiops/aiops-analyze-spikes.mdx @@ -11,7 +11,7 @@ tags: [ 'serverless', 'observability', 'how-to' ] {/* */} -Elastic ((observability)) provides built-in log rate analysis capabilities, +((observability)) provides built-in log rate analysis capabilities, based on advanced statistical methods, to help you find and investigate the causes of unusual spikes or drops in log rates. diff --git a/docs/en/serverless/aiops/aiops-detect-anomalies.mdx b/docs/en/serverless/aiops/aiops-detect-anomalies.mdx index 8798bfe592..33aa9cddad 100644 --- a/docs/en/serverless/aiops/aiops-detect-anomalies.mdx +++ b/docs/en/serverless/aiops/aiops-detect-anomalies.mdx @@ -11,7 +11,7 @@ import Roles from '../partials/roles.mdx' -The anomaly detection feature in Elastic ((observability)) automatically models the normal behavior of your time series data — learning trends, +The anomaly detection feature in ((observability)) automatically models the normal behavior of your time series data — learning trends, periodicity, and more — in real time to identify anomalies, streamline root cause analysis, and reduce false positives. To set up anomaly detection, you create and run anomaly detection jobs. @@ -47,7 +47,7 @@ To learn more about anomaly detection, refer to the [((ml))](((ml-docs))/ml-ad-o
-# Create and run an anomaly detection job +## Create and run an anomaly detection job 1. In your ((observability)) project, go to **AIOps** → **Anomaly detection**. 1. Click **Create anomaly detection job** (or **Create job** if other jobs exist). @@ -112,10 +112,10 @@ When the job runs, the ((ml)) features analyze the input stream of data, model i When an event occurs outside of the baselines of normal behavior, that event is identified as an anomaly. 1. After the job is started, click **View results**. -# View the results +## View the results After the anomaly detection job has processed some data, -you can view the results in Elastic ((observability)). +you can view the results in ((observability)). Depending on the capacity of your machine, @@ -227,7 +227,9 @@ The list includes maximum anomaly scores, which in this case are aggregated for There is also a total sum of the anomaly scores for each influencer. Use this list to help you narrow down the contributing factors and focus on the most anomalous entities. 1. Under **Anomaly timeline**, click a section in the swim lanes to obtain more information about the anomalies in that time period. + ![Anomaly Explorer showing swim lanes with anomaly selected ](../images/anomaly-explorer.png) + You can see exact times when anomalies occurred. If there are multiple detectors or metrics in the job, you can see which caught the anomaly. You can also switch to viewing this time series in the **Single Metric Viewer** by selecting **View series** in the **Actions** menu. diff --git a/docs/en/serverless/aiops/aiops-detect-change-points.mdx b/docs/en/serverless/aiops/aiops-detect-change-points.mdx index df74d82f8d..af5f54923b 100644 --- a/docs/en/serverless/aiops/aiops-detect-change-points.mdx +++ b/docs/en/serverless/aiops/aiops-detect-change-points.mdx @@ -9,12 +9,12 @@ tags: [ 'serverless', 'observability', 'how-to' ] {/* */} -The change point detection feature in Elastic ((observability)) detects distribution changes, +The change point detection feature in ((observability)) detects distribution changes, trend changes, and other statistically significant change points in time series data. Unlike anomaly detection, change point detection does not require you to configure a job or generate a model. Instead you select a metric and immediately see a visual representation that splits the time series into two parts, before and after the change point. -Elastic ((observability)) uses a [change point aggregation](((ref))/search-aggregations-change-point-aggregation.html) +((observability)) uses a [change point aggregation](((ref))/search-aggregations-change-point-aggregation.html) to detect change points. This aggregation can detect change points when: * a significant dip or spike occurs diff --git a/docs/en/serverless/aiops/aiops.mdx b/docs/en/serverless/aiops/aiops.mdx index 246c6847f3..dc278a718a 100644 --- a/docs/en/serverless/aiops/aiops.mdx +++ b/docs/en/serverless/aiops/aiops.mdx @@ -7,7 +7,7 @@ tags: [ 'serverless', 'observability', 'overview' ]

-The AIOps capabilities available in Elastic ((observability)) enable you to consume and process large observability data sets at scale, reducing the time and effort required to detect, understand, investigate, and resolve incidents. +The AIOps capabilities available in ((observability)) enable you to consume and process large observability data sets at scale, reducing the time and effort required to detect, understand, investigate, and resolve incidents. Built on predictive analytics and ((ml)), our AIOps capabilities require no prior experience with ((ml)). DevOps engineers, SREs, and security analysts can get started right away using these AIOps features with little or no advanced configuration: diff --git a/docs/en/serverless/alerting/create-elasticsearch-query-alert-rule.mdx b/docs/en/serverless/alerting/create-elasticsearch-query-alert-rule.mdx index 69fe4b943f..df17fe072a 100644 --- a/docs/en/serverless/alerting/create-elasticsearch-query-alert-rule.mdx +++ b/docs/en/serverless/alerting/create-elasticsearch-query-alert-rule.mdx @@ -50,8 +50,7 @@ For example: If you use [KQL](((kibana-ref))/kuery-query.html) or [Lucene](((kibana-ref))/lucene-query.html), you must specify a data view then define a text-based query. For example, `http.request.referrer: "https://example.com"`. - - If you use [ES|QL](((ref))/esql.html), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|). + If you use [ES|QL](((ref))/esql.html), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|). For example: ```sh @@ -66,6 +65,7 @@ For example: When : Specify how to calculate the value that is compared to the threshold. The value is calculated by aggregating a numeric field within the time window. The aggregation options are: `count`, `average`, `sum`, `min`, and `max`. When using `count` the document count is used and an aggregation field is not necessary. + Over or Grouped Over : Specify whether the aggregation is applied over all documents or split into groups using up to four grouping fields. If you choose to use grouping, it's a [terms](((ref))/search-aggregations-bucket-terms-aggregation.html) or [multi terms aggregation](((ref))/search-aggregations-bucket-multi-terms-aggregation.html); an alert will be created for each unique set of values when it meets the condition. @@ -176,7 +176,7 @@ You can also specify [variables common to all rules](((kibana-ref))/rule-action- For example, the message in an email connector action might contain: - ``` + ```txt Elasticsearch query rule '{{rule.name}}' is active: {{#context.hits}} @@ -191,7 +191,7 @@ You can also specify [variables common to all rules](((kibana-ref))/rule-action- For example: {/* NOTCONSOLE */} - ``` + ```txt {{#context.hits}} timestamp: {{_source.@timestamp}} day of the week: {{fields.day_of_week}} [^1] @@ -203,7 +203,7 @@ You can also specify [variables common to all rules](((kibana-ref))/rule-action- the [Mustache](https://mustache.github.io/) template array syntax is used to iterate over these values in your actions. 
For example: - ``` + ```txt {{#context.hits}} Labels: {{#fields.labels}} diff --git a/docs/en/serverless/alerting/create-latency-threshold-alert-rule.mdx b/docs/en/serverless/alerting/create-latency-threshold-alert-rule.mdx index 43dd1036a5..4c464e6866 100644 --- a/docs/en/serverless/alerting/create-latency-threshold-alert-rule.mdx +++ b/docs/en/serverless/alerting/create-latency-threshold-alert-rule.mdx @@ -22,7 +22,7 @@ These steps show how to use the **Alerts** UI. You can also create a latency threshold rule directly from any page within **Applications**. Click the **Alerts and rules** button, and select **Create threshold rule** and then **Latency**. When you create a rule this way, the **Name** and **Tags** fields will be prepopulated but you can still change these.
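If you prefer to script rule creation instead of clicking through the UI, the same kind of rule can be created with Kibana's Alerting API. The sketch below is illustrative only: the `/api/alerting/rule` endpoint and `kbn-xsrf` header are standard, but the `rule_type_id` and the parameter names in `params` are assumptions — verify them against the Alerting API reference before using them.

```ts
// Hedged sketch: create a latency threshold rule through the Kibana Alerting API.
// KIBANA_URL and API_KEY are placeholders; the params field names are assumptions.
const KIBANA_URL = process.env.KIBANA_URL ?? 'https://my-project.kb.us-east-1.aws.elastic.cloud';
const API_KEY = process.env.API_KEY ?? '';

async function createLatencyRule(): Promise<void> {
  const response = await fetch(`${KIBANA_URL}/api/alerting/rule`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'kbn-xsrf': 'true',
      Authorization: `ApiKey ${API_KEY}`,
    },
    body: JSON.stringify({
      name: 'Checkout latency above 500 ms',
      rule_type_id: 'apm.transaction_duration', // assumed ID for the latency threshold rule type
      consumer: 'alerts',
      schedule: { interval: '1m' },
      tags: ['latency', 'checkout'],
      params: {
        // Illustrative parameters -- names are assumptions, check the API docs.
        serviceName: 'checkout-service',
        transactionType: 'request',
        environment: 'production',
        threshold: 500,
        windowSize: 5,
        windowUnit: 'm',
        aggregationType: 'avg',
      },
    }),
  });
  if (!response.ok) {
    throw new Error(`Rule creation failed: ${response.status} ${await response.text()}`);
  }
}

createLatencyRule().catch(console.error);
```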
-To create your latency threshold rule:: +To create your latency threshold rule: 1. In your ((observability)) project, go to **Alerts**. 1. Select **Manage Rules** from the **Alerts** page, and select **Create rule**. diff --git a/docs/en/serverless/alerting/synthetic-monitor-status-alert.mdx b/docs/en/serverless/alerting/synthetic-monitor-status-alert.mdx index 38dc6de731..1dc8829e2d 100644 --- a/docs/en/serverless/alerting/synthetic-monitor-status-alert.mdx +++ b/docs/en/serverless/alerting/synthetic-monitor-status-alert.mdx @@ -86,35 +86,67 @@ You an also specify [variables common to all rules](((kibana-ref))/rule-action-v `context.checkedAt` - Timestamp of the monitor run. + + Timestamp of the monitor run. + `context.hostName` - Hostname of the location from which the check is performed. + + Hostname of the location from which the check is performed. + `context.lastErrorMessage` - Monitor last error message. + + Monitor last error message. + `context.locationId` - Location id from which the check is performed. + + Location id from which the check is performed. + `context.locationName` - Location name from which the check is performed. + + Location name from which the check is performed. + `context.locationNames` - Location names from which the checks are performed. + + Location names from which the checks are performed. + `context.message` - A generated message summarizing the status of monitors currently down. + + A generated message summarizing the status of monitors currently down. + `context.monitorId` - ID of the monitor. + + ID of the monitor. + `context.monitorName` - Name of the monitor. + + Name of the monitor. + `context.monitorTags` - Tags associated with the monitor. + + Tags associated with the monitor. + `context.monitorType` - Type (for example, HTTP/TCP) of the monitor. + + Type (for example, HTTP/TCP) of the monitor. + `context.monitorUrl` - URL of the monitor. + + URL of the monitor. + `context.reason` - A concise description of the reason for the alert. + + A concise description of the reason for the alert. + `context.recoveryReason` - A concise description of the reason for the recovery. + + A concise description of the reason for the recovery. + `context.status` - Monitor status (for example, "down"). + + Monitor status (for example, "down"). + `context.viewInAppUrl` - Open alert details and context in Synthetics app. + + Open alert details and context in Synthetics app. + \ No newline at end of file diff --git a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-opentelemetry-native-support.mdx b/docs/en/serverless/apm-agents/apm-agents-opentelemetry-opentelemetry-native-support.mdx index d009cfeb62..ab639a4e1b 100644 --- a/docs/en/serverless/apm-agents/apm-agents-opentelemetry-opentelemetry-native-support.mdx +++ b/docs/en/serverless/apm-agents/apm-agents-opentelemetry-opentelemetry-native-support.mdx @@ -21,7 +21,7 @@ be sent directly to Elastic. ## Send data from an upstream OpenTelemetry Collector -Connect your OpenTelemetry Collector instances to Elastic ((observability)) using the OTLP exporter: +Connect your OpenTelemetry Collector instances to ((observability)) using the OTLP exporter: ```yaml receivers: [^1] @@ -64,7 +64,7 @@ service: [OTLP receiver](https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver), that forward data emitted by APM agents, or the [host metrics receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver). 
[^2]: We recommend using the [Batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md) and the [memory limiter processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md). For more information, see [recommended processors](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#recommended-processors). [^3]: The [logging exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/loggingexporter) is helpful for troubleshooting and supports various logging levels, like `debug`, `info`, `warn`, and `error`. -[^4]: Elastic ((observability)) endpoint configuration. +[^4]: ((observability)) endpoint configuration. Elastic supports a ProtoBuf payload via both the OTLP protocol over gRPC transport [(OTLP/gRPC)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc) and the OTLP protocol over HTTP transport [(OTLP/HTTP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp). To learn more about these exporters, see the OpenTelemetry Collector documentation: diff --git a/docs/en/serverless/apm-agents/apm-agents-opentelemetry.mdx b/docs/en/serverless/apm-agents/apm-agents-opentelemetry.mdx index d336b63fc1..ed5b4c6670 100644 --- a/docs/en/serverless/apm-agents/apm-agents-opentelemetry.mdx +++ b/docs/en/serverless/apm-agents/apm-agents-opentelemetry.mdx @@ -82,11 +82,11 @@ You can set up an [OpenTelemetry Collector](https://opentelemetry.io/docs/collec
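As a concrete illustration of sending data from an OpenTelemetry language SDK to Elastic, the following is a minimal Node.js sketch that exports traces over OTLP/HTTP. It assumes the standard `@opentelemetry/sdk-node`, `@opentelemetry/exporter-trace-otlp-http`, and `@opentelemetry/auto-instrumentations-node` packages; the endpoint URL and the `Authorization` header value are placeholders — substitute the OTLP endpoint and API key (or secret token) from your own project.

```ts
// Minimal OpenTelemetry Node.js SDK sketch exporting traces over OTLP/HTTP.
// Endpoint and Authorization values are placeholders for your Elastic project.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

// The OTLP/HTTP trace exporter expects the full traces path, so append /v1/traces
// to the base endpoint. The service name can be set with OTEL_SERVICE_NAME.
const baseEndpoint =
  process.env.OTEL_EXPORTER_OTLP_ENDPOINT ?? 'https://my-deployment.apm.us-east-1.aws.elastic.cloud';

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: `${baseEndpoint}/v1/traces`,
    headers: {
      // Assumption: API key auth; a secret token ("Bearer <token>") is another common setup.
      Authorization: `ApiKey ${process.env.ELASTIC_APM_API_KEY ?? ''}`,
    },
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Flush and shut down cleanly when the process is stopped.
process.on('SIGTERM', () => {
  sdk.shutdown().finally(() => process.exit(0));
});
```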
{/* Why you _would_ choose this approach */} -This approach works well when you need to instrument a technology that Elastic doesn't provide a solution for. For example, if you want to instrument C or C++ you could use the [OpenTelemetry C++ client](https://github.com/open-telemetry/opentelemetry-cpp). +This approach works well when you need to instrument a technology that Elastic doesn't provide a solution for. For example, if you want to instrument C or C((plus))((plus)) you could use the [OpenTelemetry C((plus))((plus)) client](https://github.com/open-telemetry/opentelemetry-cpp). {/* Other languages include erlang, lua, perl. */} {/* Why you would _not_ choose this approach */} -However, there are some limitations when using collectors and language SDKs built and maintainedby OpenTelemetry, including: +However, there are some limitations when using collectors and language SDKs built and maintained by OpenTelemetry, including: * Elastic can't provide implementation support on how to use upstream OpenTelemetry tools. * You won't have access to Elastic enterprise APM features. diff --git a/docs/en/serverless/infra-monitoring/get-started-with-metrics.mdx b/docs/en/serverless/infra-monitoring/get-started-with-metrics.mdx index 52ece073d9..aefb9438ab 100644 --- a/docs/en/serverless/infra-monitoring/get-started-with-metrics.mdx +++ b/docs/en/serverless/infra-monitoring/get-started-with-metrics.mdx @@ -12,7 +12,7 @@ import Roles from '../partials/roles.mdx' In this guide you'll learn how to onboard system metrics data from a machine or server, -then observe the data in Elastic ((observability)). +then observe the data in ((observability)). To onboard system metrics data: diff --git a/docs/en/serverless/infra-monitoring/host-metrics.mdx b/docs/en/serverless/infra-monitoring/host-metrics.mdx index d5807486fe..db97c8bbd3 100644 --- a/docs/en/serverless/infra-monitoring/host-metrics.mdx +++ b/docs/en/serverless/infra-monitoring/host-metrics.mdx @@ -398,7 +398,7 @@ However, any alerts that use the old definition will refer to the metric as "leg - **Network Inbound (RX) (legacy)** + **Network Inbound (RX) (legacy)** Number of bytes that have been received per second on the public interfaces of the hosts. @@ -406,7 +406,7 @@ However, any alerts that use the old definition will refer to the metric as "leg - **Network Outbound (TX) (legacy)** + **Network Outbound (TX) (legacy)** Number of bytes that have been sent per second on the public interfaces of the hosts. diff --git a/docs/en/serverless/infra-monitoring/infra-monitoring.mdx b/docs/en/serverless/infra-monitoring/infra-monitoring.mdx index 260f3620e9..18ae85e5d1 100644 --- a/docs/en/serverless/infra-monitoring/infra-monitoring.mdx +++ b/docs/en/serverless/infra-monitoring/infra-monitoring.mdx @@ -9,7 +9,7 @@ tags: [ 'serverless', 'observability', 'overview' ]
-Elastic ((observability)) allows you to visualize infrastructure metrics to help diagnose problematic spikes, +((observability)) allows you to visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with logs and APM data. diff --git a/docs/en/serverless/inventory.mdx b/docs/en/serverless/inventory.mdx index bdea5869b7..a4859a4a44 100644 --- a/docs/en/serverless/inventory.mdx +++ b/docs/en/serverless/inventory.mdx @@ -9,7 +9,7 @@ import Roles from './partials/roles.mdx'

-Inventory provides a single place to observe the status of your entire ecosystem of hosts, containers, and services at a glance, even just from logs. From there, you can monitor and understand the health of your entities, check what needs attention, and start your investigations. +Inventory provides a single place to observe the status of your entire ecosystem of hosts, containers, and services at a glance, even just from logs. From there, you can monitor and understand the health of your entities, check what needs attention, and start your investigations. The new Inventory requires the Elastic Entity Model (EEM). To learn more, refer to . @@ -28,7 +28,7 @@ Where `host.name` is set in `metrics-*`, `logs-*`, `filebeat-*`, and `metricbeat **Services** Where `service.name` is set in `filebeat*`, `logs-*`, `metrics-apm.service_transaction.1m*`, and `metrics-apm.service_summary.1m*` - + **Containers** Where `container.id` is set in `metrics-*`, `logs-*`, `filebeat-*`, and `metricbeat-*` @@ -47,9 +47,9 @@ Inventory allows you to: When you open the Inventory for the first time, you'll be asked to enable the EEM. Once enabled, the Inventory will be accessible to anyone with the appropriate privileges. - The Inventory feature can be completely disabled using the `observability:entityCentricExperience` flag in **Stack Management**. + The Inventory feature can be completely disabled using the `observability:entityCentricExperience` flag in **Stack Management**. - + 1. In the search bar, search for your entities by name or type, for example `entity.type:service`. @@ -77,21 +77,23 @@ Entities are added to the Inventory through one of the following approaches: **A ### Add data To add entities, select **Add data** from the left-hand navigation and choose one of the following onboarding journeys: -- Auto-detect logs and metrics + +Auto-detect logs and metrics - Detects hosts (with metrics and logs) + Detects hosts (with metrics and logs) -- Kubernetes +Kubernetes - Detects hosts, containers, and services + Detects hosts, containers, and services -- Elastic APM / OpenTelemetry / Synthetic Monitor +Elastic APM / OpenTelemetry / Synthetic Monitor - Detects services - + Detects services + + ### Associate existing service logs diff --git a/docs/en/serverless/logging/add-logs-service-name.mdx b/docs/en/serverless/logging/add-logs-service-name.mdx index d30b0f85e6..1c1cd49c73 100644 --- a/docs/en/serverless/logging/add-logs-service-name.mdx +++ b/docs/en/serverless/logging/add-logs-service-name.mdx @@ -46,7 +46,7 @@ Follow these steps to update your mapping: 1. Under **Field path**, select the existing field you want to map to the service name. 1. Select **Add field**. -For more ways to add a field to your mapping, refer to [add a field to an existing mapping](((ref))/explicit-mapping.html#add-field-mapping.html). +For more ways to add a field to your mapping, refer to [add a field to an existing mapping](((ref))/explicit-mapping.html#add-field-mapping). ## Additional ways to process data diff --git a/docs/en/serverless/logging/correlate-application-logs.mdx b/docs/en/serverless/logging/correlate-application-logs.mdx index f15c608d9f..d83bb7ddd6 100644 --- a/docs/en/serverless/logging/correlate-application-logs.mdx +++ b/docs/en/serverless/logging/correlate-application-logs.mdx @@ -70,7 +70,7 @@ without adding an ECS logger dependency or modifying the application. 
This feature is supported for the following ((apm-agent))s: -* [Ruby](((apm-ruby-ref))/log-reformat.html) +* [Ruby](((apm-ruby-ref))/configuration.html#config-log-ecs-formatting) * [Python](((apm-py-ref))/logs.html#log-reformatting) * [Java](((apm-java-ref))/logs.html#log-reformatting) diff --git a/docs/en/serverless/logging/log-monitoring.mdx b/docs/en/serverless/logging/log-monitoring.mdx index 12969f2b07..b8693a40b8 100644 --- a/docs/en/serverless/logging/log-monitoring.mdx +++ b/docs/en/serverless/logging/log-monitoring.mdx @@ -84,7 +84,7 @@ The following resources provide information on viewing and monitoring your logs: The **Data Set Quality** page provides an overview of your data sets and their quality. Use this information to get an idea of your overall data set quality, and find data sets that contain incorrectly parsed documents. -Monitor data sets +Monitor data sets ## Application logs diff --git a/docs/en/serverless/logging/view-and-monitor-logs.mdx b/docs/en/serverless/logging/view-and-monitor-logs.mdx index a1e1e68c4a..1c8f2dc9c9 100644 --- a/docs/en/serverless/logging/view-and-monitor-logs.mdx +++ b/docs/en/serverless/logging/view-and-monitor-logs.mdx @@ -87,4 +87,4 @@ From the log details of a document with ignored fields, as shown by the degraded Select **Data set details** to open the **Data Set Quality** page. Here you can monitor your data sets and investigate any issues. The **Data Set Details** page is also accessible from **Project settings** → **Management** → **Data Set Quality**. -Refer to Monitor data sets for more information. \ No newline at end of file +Refer to Monitor data sets for more information. \ No newline at end of file diff --git a/docs/en/serverless/observability-overview.mdx b/docs/en/serverless/observability-overview.mdx index 6bd2cc0e40..49871d18a3 100644 --- a/docs/en/serverless/observability-overview.mdx +++ b/docs/en/serverless/observability-overview.mdx @@ -5,19 +5,19 @@ description: Learn how to accelerate problem resolution with open, flexible, and tags: [ 'serverless', 'observability', 'overview' ] --- -

+

((observability)) provides granular insights and context into the behavior of applications running in your environments. -It's an important part of any system that you build and want to monitor. +It's an important part of any system that you build and want to monitor. Being able to detect and fix root cause events quickly within an observable system is a minimum requirement for any analyst. -Elastic ((observability)) provides a single stack to unify your logs, metrics, and application traces. +((observability)) provides a single stack to unify your logs, metrics, and application traces. Ingest your data directly to your Observability project, where you can further process and enhance the data, before visualizing it and adding alerts. - +
diff --git a/docs/en/serverless/projects/create-an-observability-project.mdx b/docs/en/serverless/projects/create-an-observability-project.mdx index 77bf3c65f3..dac7e20c1b 100644 --- a/docs/en/serverless/projects/create-an-observability-project.mdx +++ b/docs/en/serverless/projects/create-an-observability-project.mdx @@ -1,7 +1,7 @@ --- slug: /serverless/observability/create-an-observability-project -title: Create an Elastic ((observability)) project -description: Create a fully-managed Elastic ((observability)) project to monitor the health of your applications. +title: Create an ((observability)) project +description: Create a fully-managed ((observability)) project to monitor the health of your applications. tags: [ 'serverless', 'observability', 'how-to' ] --- @@ -11,7 +11,7 @@ import Roles from '../partials/roles.mdx'

-An ((observability)) project allows you to run Elastic ((observability)) in an autoscaled and fully-managed environment, +An ((observability)) project allows you to run ((observability)) in an autoscaled and fully-managed environment, where you don't have to manage the underlying ((es)) cluster or ((kib)) instances. 1. Navigate to [cloud.elastic.co](https://cloud.elastic.co/) and log in to your account. diff --git a/docs/en/serverless/quickstarts/k8s-logs-metrics.mdx b/docs/en/serverless/quickstarts/k8s-logs-metrics.mdx index 9ae1a9cf49..62ca87950e 100644 --- a/docs/en/serverless/quickstarts/k8s-logs-metrics.mdx +++ b/docs/en/serverless/quickstarts/k8s-logs-metrics.mdx @@ -25,7 +25,8 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu 1. Create a new ((observability)) project, or open an existing one. 1. In your ((observability)) project, go to **Add Data**. 1. Select **Monitor infrastructure**, and then select **Kubernetes**. - ![Kubernetes entry point](../images/quickstart-k8s-entry-point.png) + + ![Kubernetes entry point](../images/quickstart-k8s-entry-point.png) 1. To install the Elastic Agent on your host, copy and run the install command. You will use the kubectl command to download a manifest file, inject user's API key generated by Kibana, and create the Kubernetes resources. diff --git a/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.mdx b/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.mdx index 946d752b0d..72f1ecf6e1 100644 --- a/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.mdx +++ b/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.mdx @@ -36,8 +36,10 @@ The script also generates an ((agent)) configuration file that you can use with 1. In your ((observability)) project, go to **Add Data**. 1. Select **Collect and analyze logs**, and then select **Auto-detect logs and metrics**. 1. Copy the command that's shown. For example: - ![Quick start showing command for running auto-detection](../images/quickstart-autodetection-command.png) - You'll run this command to download the auto-detection script and scan your system for observability data. + + ![Quick start showing command for running auto-detection](../images/quickstart-autodetection-command.png) + + You'll run this command to download the auto-detection script and scan your system for observability data. 1. Open a terminal on the host you want to scan, and run the command. 1. Review the list of log files: - Enter `Y` to ingest all the log files listed. @@ -91,7 +93,7 @@ Metrics that indicate a possible problem are highlighted in red. ## Get value out of your data After using the dashboards to examine your data and confirm you've ingested all the host logs and metrics you want to monitor, -you can use Elastic ((observability)) to gain deeper insight into your data. +you can use ((observability)) to gain deeper insight into your data. For host monitoring, the following capabilities and features are recommended: diff --git a/docs/en/serverless/synthetics/synthetics-analyze.mdx b/docs/en/serverless/synthetics/synthetics-analyze.mdx index 8e93a1c0f0..9f0be4eb41 100644 --- a/docs/en/serverless/synthetics/synthetics-analyze.mdx +++ b/docs/en/serverless/synthetics/synthetics-analyze.mdx @@ -107,7 +107,7 @@ included the in retest on failure, you'll see retests listed in the **Test runs** table. Runs that are retests include a -rerun icon (image:images/icons/refresh.svg[Refresh icon]) next to the result badge. 
+rerun icon () next to the result badge. ![A failed run and a retest in the table of test runs in the Synthetics UI](../images/synthetics-retest.png) diff --git a/docs/en/serverless/synthetics/synthetics-command-reference.mdx b/docs/en/serverless/synthetics/synthetics-command-reference.mdx index 9c248069ae..fb55547acf 100644 --- a/docs/en/serverless/synthetics/synthetics-command-reference.mdx +++ b/docs/en/serverless/synthetics/synthetics-command-reference.mdx @@ -13,7 +13,7 @@ tags: [] ## `@elastic/synthetics` -Elastic uses the [@elastic/synthetics](https://www.npmjs.com/package/@elastic/synthetics[@elastic/synthetics) +Elastic uses the [@elastic/synthetics](https://www.npmjs.com/package/@elastic/synthetics) library to run synthetic browser tests and report the test results. The library also provides a CLI to help you scaffold, develop/run tests locally, and push tests to Elastic. diff --git a/docs/en/serverless/synthetics/synthetics-create-test.mdx b/docs/en/serverless/synthetics/synthetics-create-test.mdx index 0d41da570d..146679ff54 100644 --- a/docs/en/serverless/synthetics/synthetics-create-test.mdx +++ b/docs/en/serverless/synthetics/synthetics-create-test.mdx @@ -287,7 +287,7 @@ in Elastic Synthetics including: * The [`toHaveScreenshot`](https://playwright.dev/docs/api/class-locatorassertions#locator-assertions-to-have-screenshot-1) and [`toMatchSnapshot`](https://playwright.dev/docs/api/class-snapshotassertions) assertions - Captures done programmatically via https://playwright.dev/docs/api/class-page#page-screenshot[`screenshot`] or https://playwright.dev/docs/api/class-page#page-video[`video`] are not stored and are not shown in the Synthetics application. Providing a `path` will likely make the monitor fail due to missing permissions to write local files. + Captures done programmatically via [`screenshot`](https://playwright.dev/docs/api/class-page#page-screenshot) or [`video`](https://playwright.dev/docs/api/class-page#page-video) are not stored and are not shown in the Synthetics application. Providing a `path` will likely make the monitor fail due to missing permissions to write local files.
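To make the scripted-monitor workflow more tangible, here is a small, hedged example of a browser journey written with `@elastic/synthetics`. The journey name, monitor ID, schedule, and target URL are placeholders; the assertions use Playwright locator matchers, which Elastic Synthetics supports, and the script deliberately avoids programmatic `page.screenshot()` calls because their output is not stored.

```ts
// Hedged example of a browser journey for Elastic Synthetics.
// The journey name, monitor id, schedule, and URL are placeholders.
import { journey, step, monitor, expect } from '@elastic/synthetics';

journey('Example storefront journey', ({ page, params }) => {
  // Per-journey monitor settings, pushed with the rest of the project.
  monitor.use({
    id: 'example-storefront',
    schedule: 10,
  });

  step('load the home page', async () => {
    await page.goto((params.url as string) ?? 'https://example.com');
  });

  step('verify the main heading', async () => {
    // Playwright locator assertions are supported; programmatic screenshots are
    // intentionally avoided because they are not stored by Synthetics.
    await expect(page.locator('h1')).toBeVisible();
  });
});
```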
diff --git a/docs/en/serverless/technical-preview-limitations.mdx b/docs/en/serverless/technical-preview-limitations.mdx index 2cb71aac11..b4309fb55f 100644 --- a/docs/en/serverless/technical-preview-limitations.mdx +++ b/docs/en/serverless/technical-preview-limitations.mdx @@ -5,6 +5,6 @@ description: Review the limitations that apply to Elastic Observability projects tags: [ 'serverless', 'observability' ] --- - +

Currently, the maximum ingestion rate for the Managed Intake Service (APM and OpenTelemetry ingest) is 11.5 MB/s of uncompressed data (roughly 1TB/d uncompressed equivalent). Ingestion at a higher rate may experience rate limiting or ingest failures. \ No newline at end of file diff --git a/docs/en/serverless/transclusion/apm/guide/install-agents/net.mdx b/docs/en/serverless/transclusion/apm/guide/install-agents/net.mdx index 6bc72a769c..dbd7b686ea 100644 --- a/docs/en/serverless/transclusion/apm/guide/install-agents/net.mdx +++ b/docs/en/serverless/transclusion/apm/guide/install-agents/net.mdx @@ -11,7 +11,7 @@ You can add the Agent and specific instrumentations to a .NET application by referencing one or more of these packages and following the package documentation. * **Host startup hook**: On .NET Core 3.0+ or .NET 5+, the agent supports auto instrumentation without any code change and without -any recompilation of your projects. See [Zero code change setup on .NET Core](((apm-dotnet-ref))t/setup-dotnet-net-core.html) +any recompilation of your projects. See [Zero code change setup on .NET Core](((apm-dotnet-ref))/setup-dotnet-net-core.html) for more details.
**Learn more in the ((apm-agent)) reference** diff --git a/docs/en/serverless/transclusion/synthetics/reference/lightweight-config/common.mdx b/docs/en/serverless/transclusion/synthetics/reference/lightweight-config/common.mdx index 92e281c180..2a860da346 100644 --- a/docs/en/serverless/transclusion/synthetics/reference/lightweight-config/common.mdx +++ b/docs/en/serverless/transclusion/synthetics/reference/lightweight-config/common.mdx @@ -152,7 +152,7 @@ **`mode`**
- (`"any"` \| `"all"`) + (`"any"` or `"all"`)
One of two modes in which to run the monitor:
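For project-based monitors created with the `@elastic/synthetics` CLI, defaults such as the schedule and locations typically live in `synthetics.config.ts` rather than in the lightweight settings described above. The sketch below is illustrative only; the project ID, Kibana URL, and location names are placeholders, and the option names should be checked against the config file generated in your own project.

```ts
// Illustrative synthetics.config.ts sketch; ids, URLs, and locations are placeholders.
import type { SyntheticsConfig } from '@elastic/synthetics';

export default (env: string): SyntheticsConfig => ({
  params: {
    url: env === 'production' ? 'https://example.com' : 'http://localhost:8080',
  },
  monitor: {
    // Defaults applied to monitors pushed from this project.
    schedule: 10,
    locations: ['us_east'],
    privateLocations: [],
  },
  project: {
    id: 'example-synthetics-project',
    url: 'https://my-project.kb.us-east-1.aws.elastic.cloud',
    space: 'default',
  },
});
```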