"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Using Dynatrace to Detect Problems in Logs","text":"Support Policy
This is a demo project created by the Developer Relations team at Dynatrace, showcasing integrations with open source technologies.
Support is provided via GitHub issues only. The materials provided in this repository are offered "as-is" without any warranties, express or implied. Use them at your own risk.
View the Code
The code for this repository is hosted on GitHub. Click the "View Code on GitHub" link above.
In this hands-on demo, you will send logs from the OpenTelemetry demo application to Dynatrace.
You will artificially create a problem in the application, which Dynatrace DAVIS will detect, raising a problem report based on the observability data.
The logs include span and trace IDs, meaning you can easily drill between signal types to see logs in the context of the distributed trace and vice versa.
Tip
Right click and \"open image in new tab\" to see large image
"},{"location":"#how-is-the-problem-created","title":"How is the problem created?","text":"You will release a new feature into production. For demo purposes, this new feature intentionally introduces failure into the system.
First you will inform Dynatrace that a change is incoming. This is done by sending a CUSTOM_CONFIGURATION event to Dynatrace. Then the feature will be enabled by toggling a feature flag.
After a few moments, the error will occur. The ERROR logs flowing into Dynatrace will trigger the problem.
This demo uses the OpenTelemetry demo application and the Dynatrace OpenTelemetry collector distribution (why might I want to use the Dynatrace OTEL Collector?).
"},{"location":"#logical-flow","title":"Logical Flow","text":""},{"location":"#compatibility","title":"Compatibility","text":"Deployment Tutorial Compatible Dynatrace Managed \u274c Dynatrace SaaS \u2714\ufe0fTip
This step is optional because there is a load generator already running. Observability data will be flowing into Dynatrace.
Expose the user interface on port 8080 by port-forwarding:
kubectl -n default port-forward svc/my-otel-demo-frontendproxy 8080:8080
Go to the Ports tab, right click the port 8080 and choose Open in Browser.
You should see the OpenTelemetry demo application.
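If you prefer to verify from the terminal first, a quick check against the forwarded port (a small optional sketch, assuming the port-forward above is still running in another terminal) should print an HTTP status code from the frontend proxy:

# Print only the HTTP status code returned by the forwarded frontend
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080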
8. Cleanup

To clean up resources, go to https://github.com/codespaces and delete the codespace.
You may also want to deactivate or delete the API token.
Getting Started

Dynatrace Environment

You must have access to a Dynatrace SaaS environment. Sign up here.
Save the Dynatrace environment URL. Note that there should be no .apps. in the URL.
The generic format is:
https://<EnvironmentID>.<Environment>.<URL>\n
For example:
https://abc12345.live.dynatrace.com\n
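The hedged API sketches later in this guide assume the environment URL and an API token are available as shell variables. Purely as an illustration of the expected shape (the values below are placeholders; dt0c01. is the standard Dynatrace token prefix):

# Placeholders only; substitute your own environment URL and token
export DT_URL="https://abc12345.live.dynatrace.com"
export DT_API_TOKEN="dt0c01.SAMPLE.TOKEN"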
"},{"location":"getting-started/#custom-runbook","title":"Custom Runbook","text":"Info
As the developer responsible for the cartservice, if problems occur, you're the best person to know how to resolve the issue.
To help your colleagues, you have prebuilt a notebook which will be useful as a runbook if / when problems occur.
You want to make this notebook automatically available whenever problems with the cartservice occur.
Download the file Redis Troubleshooting.json and save to your computer.
In Dynatrace:

1. Press ctrl + k and search for notebooks.
2. Click the Upload button at the top of the page.

Warning
Your environment and notebook IDs will be different.
"},{"location":"getting-started/#install-new-problems-app","title":"Install New Problems App","text":"In Dynatrace:
1. Press ctrl + k and search for Hub.
2. Find the Problems app and click Install.
Configure OpenPipeline

In Dynatrace:

1. Press ctrl + k and search for OpenPipeline. Open the app.
2. Ensure Logs is selected and select the Pipelines tab.
3. Click + Pipeline to create a new log ingest pipeline. Name it Log Errors.
4. Select the Data extraction tab and add a new Davis event processor.
5. Set the Matching condition to true (this means any log line flowing through the pipeline will alert).
6. Set the Event name to:

[{priority}][{deployment.release_stage}][{deployment.release_product}][{dt.owner}] {alertmessage}

7. Set the Event description to:

{supportInfo} - Log line: {content}

8. Set the event.type property to:

ERROR_EVENT
9. Add 5 new properties:

- dt.owner with value: {dt.owner}
- dt.cost.costcenter with value: {dt.cost.costcenter}
- dt.cost.product with value: {dt.cost.product}
- deployment.release_product with value: {deployment.release_product}
- deployment.release_stage with value: {deployment.release_stage}
Save it!
Don't forget to click Save to save the pipeline.
Create a dynamic routing rule to tell Dynatrace to redirect only certain logs through the Log Errors pipeline.

1. Select the Dynamic routing tab and click + Dynamic route.
2. Set the Matching condition to:

isNotNull(alertmessage) and
isNotNull(priority) and
priority == "1"

3. Click Add.
Save it!
Don't forget to click Save to save the dynamic route.
Success
The pipeline is configured.
Logs flowing into Dynatrace with an alertmessage field and a priority of "1" will be processed via your custom pipeline.
Those log lines will raise a custom problem in Dynatrace where the problem title is:
[{priority}][{deployment.release_stage}][{deployment.release_product}][{dt.owner}] {alertmessage}
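For example (an illustrative resolution of the template, not real output), a log line carrying priority 1, deployment.release_stage production, deployment.release_product Cart, dt.owner Susan and alertmessage Critical Redis connection error! would produce the problem title:

[1][production][Cart][Susan] Critical Redis connection error!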
The above needs some explanation because there's a lot of "magic" happening.
This will be explained after the demo is started, while you wait for things to initialise.
"},{"location":"getting-started/#create-api-token","title":"Create API Token","text":"In Dynatrace:
ctrl + k
. Search for access tokens
.logs.ingest
metrics.ingest
openTelemetryTrace.ingest
events.ingest
logs.ingest, metrics.ingest and openTelemetryTrace.ingest are required to send the relevant telemetry data into Dynatrace.
events.ingest is required to send the CUSTOM_CONFIGURATION event into Dynatrace.
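As an aside, these scopes map directly onto Dynatrace API endpoints. Here is a minimal, hedged sketch of pushing a single log line through the Logs Ingest API v2 with such a token; in this demo the collector does this for you, and the attribute values below are illustrative:

# Send one enriched log line; DT_URL and DT_API_TOKEN are placeholders
curl -X POST "${DT_URL}/api/v2/logs/ingest" \
  -H "Authorization: Api-Token ${DT_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '[{"content":"Critical Redis connection error!","loglevel":"ERROR","alertmessage":"Critical Redis connection error!","priority":"1","dt.owner":"Susan"}]'

A log line like this one would satisfy the dynamic route you configured above (alertmessage and priority present, priority == "1") and flow through the Log Errors pipeline.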
You've done the hard work! It is time to spin up the demo environment.
Click this button to open the demo environment. This will open in a new tab.
Create codespace
Grab a Coffee
Everything is installing. This can take a while.
While you wait, the section below will explain what you've just configured and how it works.
The OpenTelemetry demo and the Dynatrace collector will be installed automatically.
The Dynatrace details you provided during startup will be encrypted, stored in GitHub secrets and made available as environment variables.
They will also be stored in a Kubernetes secret named dynatrace-otelcol-dt-api-credentials.
Tip
Type printenv to see all environment variables set by GitHub.
Installation Explained

OpenTelemetry Experts Need Not Apply
If you already understand OpenTelemetry, the collector and OTTL, and are comfortable reading the collector configuration file, you can probably skip this section.
The pipeline setup in the previous section contained references to lots of fields such as priority, alertmessage and dt.owner.
How did all of those fields get there? Remember, this demo does not modify any of the application code.
"},{"location":"installation-explained/#1-pod-annotations","title":"1. Pod Annotations","text":"First, the developer adds additional, custom annotations to the microservice they are interested in. In this case, the cartservice
.
They do this by adding some key/value pairs to the podAnnotations (see otel-demo-values.yaml).
During initialisation, the codespace replaces the placeholder text with your tenant details and notebook ID (see post-create.sh).
It is important to realise that the developer is in full control of these K/V pairs. They can add as many or as few as they wish.
You can see these annotations with this command:
kubectl describe pod -l app.kubernetes.io/component=cartservice
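If you only want the annotations rather than the full pod description, a jsonpath query (an optional alternative, not a lab step) prints them as a map:

# Print just the annotations of the first matching cartservice pod
kubectl get pod -l app.kubernetes.io/component=cartservice \
  -o jsonpath='{.items[0].metadata.annotations}'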
"},{"location":"installation-explained/#2-collector-enriches-logs","title":"2. Collector Enriches Logs","text":"Logs are sent out of the cartservice via OpenTelemetry Protocol (OTLP) to the collector.
As log lines flow through the collector pipeline, the logs are processed by two processors: the k8sattributes and transform processors.
The k8sattributes interacts with the Kubernetes API to extract important k8s metadata such as pod names, deployment names, node names and other topology-relevant information.
This processor also pulls selected annotations from pods. Namely, the custom annotations that were set in step one.
Info
Notice also that the keys are renamed in real time to Dynatrace-relevant keys (e.g. ownedBy becomes dt.owner).
All of this information (the k8s metadata and the custom annotations) is dynamically added to each log line and span as it flows through the collector.
Thus this log line:

2024-10-01 10:00:00 INFO A log message

may become:

2024-10-01 10:00:00 INFO A log message dt.owner=Susan k8s.pod-name=cart-service-abc1234 ...
"},{"location":"installation-explained/#transform-processor","title":"transform Processor","text":"The transform processor modifies the telemetry (eg. log line content and attributes) based on the OpenTelemetry Transformation Language.
The collector creates new Dynatrace-relevant attributes based on existing attributes. For example, it takes k8s.deployment.name and creates a new attribute called dt.kubernetes.workload.name with the same value.
Two brand new attributes, alertmessage and priority, are dynamically added (see here) based on conditions we specify.
alertmessage is intended as a place for the developer to indicate a human-readable alert message.
priority is intended as a place for the developer to indicate the importance of this error.

- set(attributes["alertmessage"], "Critical Redis connection error!")
  where resource.attributes["service.name"] == "cartservice"
  and resource.attributes["deployment.release_stage"] == "production"
  and IsMatch(body, "(?i)wasn't able to connect to redis.*")

- set(attributes["priority"], "1")
  where resource.attributes["service.name"] == "cartservice"
  and resource.attributes["deployment.release_stage"] == "production"
  and IsMatch(body, "(?i)wasn't able to connect to redis.*")
OpenPipeline Integration
The previous steps demonstrate how the logs are enriched with additional metadata.
OpenPipeline can then use these fields as logs flow into Dynatrace.
"},{"location":"installation-explained/#wait-for-system","title":"Wait for System","text":"The system may still be loading.
Wait until the Running postCreate command loading spinner disappears.
Wait here until the terminal prompt looks like this (your username will differ).
"},{"location":"installation-explained/#wait-for-application","title":"Wait for Application","text":"The Kubernetes cluster is available and the application is starting.
Wait for all pods to be Ready (this can take up to 10 minutes):
kubectl wait --for condition=Ready pod --timeout=10m --all
The command will appear to hang until all pods are available.
When all pods are running, the output will look like this:
pod/dynatrace-collector-opentelemetry-collector-******-**** condition met
pod/my-otel-demo-accountingservice-******-**** condition met
pod/my-otel-demo-adservice-******-**** condition met
pod/my-otel-demo-cartservice-******-**** condition met
pod/my-otel-demo-checkoutservice-******-**** condition met
pod/my-otel-demo-currencyservice-******-**** condition met
pod/my-otel-demo-emailservice-******-**** condition met
pod/my-otel-demo-flagd-******-**** condition met
pod/my-otel-demo-frauddetectionservice-******-**** condition met
pod/my-otel-demo-frontend-******-**** condition met
pod/my-otel-demo-frontendproxy-******-**** condition met
pod/my-otel-demo-imageprovider-******-**** condition met
pod/my-otel-demo-kafka-******-**** condition met
pod/my-otel-demo-loadgenerator-******-**** condition met
pod/my-otel-demo-paymentservice-******-**** condition met
pod/my-otel-demo-productcatalogservice-******-**** condition met
pod/my-otel-demo-prometheus-server-******-**** condition met
pod/my-otel-demo-quoteservice-******-**** condition met
pod/my-otel-demo-recommendationservice-******-**** condition met
pod/my-otel-demo-shippingservice-******-**** condition met
pod/my-otel-demo-valkey-******-**** condition met
Introduce Change

The application is running correctly. It is time to introduce a change into the system.
This simulates releasing new functionality to your users in production.
"},{"location":"introduce-change/#inform-dynatrace","title":"Inform Dynatrace","text":"First, inform Dynatrace that a change is about to occur. Namely, you are going to make a change to the my-otel-demo-cartservice
service by changing the cartServiceFailure
feature flag from off
to on
.
Tell Dynatrace about the upcoming change by sending an event (note: this event does not actually make the change; you still need to do that yourself).
Run the following:
./runtimeChange.sh my-otel-demo-cartservice cartServiceFailure on
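For the curious: a script like runtimeChange.sh plausibly boils down to a call to the Dynatrace Events API v2, which is why the events.ingest scope was needed. A hedged sketch, not the script's actual contents (the title and properties are illustrative):

# Sketch: announce a configuration change via the Events API v2
curl -X POST "${DT_URL}/api/v2/events/ingest" \
  -H "Authorization: Api-Token ${DT_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "eventType": "CUSTOM_CONFIGURATION",
        "title": "cartServiceFailure set to on",
        "entitySelector": "type(SERVICE),entityName.equals(my-otel-demo-cartservice)",
        "properties": {"feature.flag": "cartServiceFailure", "new.value": "on"}
      }'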
Refresh the my-otel-demo-cartservice service page and near the bottom you should see the configuration change event.
Open this file: flags.yaml
Change the defaultValue of cartServiceFailure from "off" to "on" (scroll to line 75).
Now apply the change by running this command:
kubectl apply -f $CODESPACE_VSCODE_FOLDER/flags.yaml
You should see:
configmap/my-otel-demo-flagd-config configured
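To double-check that the new value actually landed in the cluster (an optional verification, not an original lab step; the exact layout of the flag definition inside the ConfigMap may vary):

# Show the cartServiceFailure flag definition inside the ConfigMap
kubectl get configmap my-otel-demo-flagd-config -o yaml | grep -A 10 cartServiceFailure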
Be Patient
The application will now generate errors when emptying the user's cart. It will do this one time in ten, so be patient; it can take a few moments for the errors to occur.
"},{"location":"introduce-change/#generate-your-own-traffic","title":"Generate Your Own Traffic","text":"There is a load generator running, but you can generate traffic by accessing the site.
See "4. Access User Interface" above.
Repeatedly add an item to your cart, go to the cart and empty it. With a little "luck" you will generate a backend failure.
"},{"location":"introduce-change/#open-problems-app","title":"Open Problems App","text":"In Dynatrace:
ctrl + k
. Search for problems
Wait for the problem to appear.
You can also open the my-otel-demo-cartservice service screen to monitor for failures.
1. Press ctrl + k and search for Services.
2. Open my-otel-demo-cartservice.
3. Watch the Failed requests chart.

See here.
"},{"location":"review-problem/","title":"7. Review Problem","text":"Tip
Right click each image and "Open image in new tab" to see a larger version.
After a few moments, DAVIS will detect the issue and create a problem.
Question
Let's see what Dynatrace can tell us about this issue...
Press ctrl + k and search for Problems; this will open the problems screen.
You should see a problem titled: Redis connection errors
Opening the problem record, you will see that it has affected one entity: the my-otel-demo-cartservice.
Navigate to the Logs panel. Click Run query next to Show x errors (your number of errors may differ from the screenshot).
Expand the log entry and notice you have some key metadata available:
- host.name (which equates to the container name)
- loglevel (i.e. ERROR)
- span_id and trace_id
- dt.owner
- dt.cost.product and dt.cost.costcenter
Now click Show surrounding logs: this shows ALL log lines with the same trace_id.
You can also choose based on topology to see the error in the context of all other logs on that service at the time of the error.
This means you can see precisely what led up to the failure. In this case:
Notice that an error status code and a detailed message are also available:
- statuscode is FailedPrecondition
- detail provides an error message: Can't access cart storage. System.ApplicationException: Wasn't able to connect to redis...
- The detail field also provides a reference to the line of code (LOC) where this error occurred.

In this demo application, logs, spans and traces are all correctly instrumented with the span_id and trace_id fields, meaning logs can be correlated and linked to traces.
Let's navigate from the log line to the trace view to get a wider view of the error and what the user was trying to do during this action.
1. Click the trace_id. This should open the Explore context menu.
2. Choose Open field with (open record with also opens the trace but "jumps" you down the trace to the error location).
3. Choose the Distributed traces app.
The trace view gives a deeper, more contextual view of what we've already seen from the logs.
The user tries to place an order; currency conversions and quotations occur. Finally the EmptyCart method is called, which fails.
Recall that the developer provided us with a handy runbook.
Navigate back to the problem and notice the problem description contains a link to the Ops runbook.
Follow the link to the runbook.
"},{"location":"review-problem/#immediate-action","title":"Immediate Action","text":"The first section of the runbook provides clear instructions on what to do and who to contact.
"},{"location":"review-problem/#chart-1-error-trend","title":"Chart 1: Error Trend","text":"Re-run sections
You may need to re-run the sections to refresh the data. Just click each chart and click the Run button.
The first chart shows an increased failure rate for the cartservice.
OK, we're on to something...
DAVIS told us (and our investigation confirmed) that the problem originated in the cartservice.
We know that the problem was caused by a failure to connect to Redis. But what caused that error? Did something change?
"},{"location":"review-problem/#chart-2-change-caused-the-failure","title":"Chart 2: Change Caused the Failure","text":"Chart two shows both configuration events and problems on the same chart.
Change is the cause of most failures
Something changed on the cartservice immediately prior to an issue occurring.
Thanks to the \"configuration changed\" event we have all necessary information to understand the true root cause.
🎉 Congratulations 🎉
You have successfully completed this Observability Lab. Continue below to clean up your environment.
In Dynatrace, press ctrl + k and search for Services. Dynatrace creates service entities based on the incoming span data. The logs are also available for some services.
In this demo, you will be focussing on the Cart service, which does have logs and is written in .NET.
You can also query data via notebooks and dashboards (press ctrl + k and search for notebooks or dashboards).
For example, to validate logs are available for cartservice, use one of the following methods:

- Open the logs app, click the + icon and add a filter for service.name with a value of cartservice to see the cartservice logs.
- Open the my-otel-demo-cartservice service screen and click Run query on the logs panel.
- Open the notebooks app and run:

fetch logs
| filter service.name == "cartservice"
| limit 10
Content here about what the user should do, where they should go, and what they could learn next.
"},{"location":"snippets/disclaimer/","title":"Disclaimer","text":"Support Policy
This is a demo project created by the Developer Relations team at Dynatrace, showcasing integrations with open source technologies.
Support is provided via GitHub issues only. The materials provided in this repository are offered \"as-is\" without any warranties, express or implied. Use them at your own risk.
"},{"location":"snippets/view-code/","title":"View code","text":"View the Code
The code for this repository is hosted on GitHub. Click the \"View Code on GitHub\" Link above.
"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index a36f145cee342b89bd14e2113486704c1d8c74f0..4a05d60ec304efe5c1dbaa4de6911377b28f3500 100755 GIT binary patch delta 13 Ucmb=gXP58h;7BOto5)@P02)jKvj6}9 delta 13 Ucmb=gXP58h;0QRvJ(0Zv02^xq?*IS* diff --git a/snippets/disclaimer/index.html b/snippets/disclaimer/index.html index 2fec2c9..dc82f15 100755 --- a/snippets/disclaimer/index.html +++ b/snippets/disclaimer/index.html @@ -482,13 +482,9 @@Support Level
-This repository contains demo projects created by the Developer Relations team at Dynatrace, showcasing integrations with open source technologies.
-Please note the following:
-No Official Support: These demos are not covered under any service level agreements (SLAs) or official support channels at Dynatrace, including Dynatrace One.
-Community Support Only: Any support for these demos is provided exclusively through the Dynatrace Community.
-No Warranty: The materials provided in this repository are offered "as-is" without any warranties, express or implied. Use them at your own risk.
-For any questions or discussions, please engage with us on the Dynatrace Community.
+Support Policy
+This is a demo project created by the Developer Relations team at Dynatrace, showcasing integrations with open source technologies.
+Support is provided via GitHub issues only. The materials provided in this repository are offered "as-is" without any warranties, express or implied. Use them at your own risk.
View the Code
+The code for this repository is hosted on GitHub. Click the "View Code on GitHub" Link above.
+