Merge pull request #37 from stakater/review
Add link check and spell check and update docs
rasheedamir authored Sep 13, 2022
2 parents 62e52ab + 5d46d88 commit deba14c
Showing 106 changed files with 783 additions and 588 deletions.
16 changes: 16 additions & 0 deletions .github/md_config.json
@@ -0,0 +1,16 @@
{
  "ignorePatterns": [
    {
      "pattern": "^(?!http).+"
    },
    {
      "pattern": "^(?!https://stakater).+"
    },
    {
      "pattern": "^(?!http://nexus).+"
    },
    {
      "pattern": "^(?!https://nexus).+"
    }
  ]
}
19 changes: 19 additions & 0 deletions .github/workflows/pull_request.yaml
@@ -9,6 +9,25 @@ env:
  DOCKER_FILE_PATH: Dockerfile

jobs:
  link_check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Link check
        uses: gaurav-nelson/github-action-markdown-link-check@v1
        with:
          config-file: .github/md_config.json
  spell_check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Spell check
        uses: errata-ai/[email protected]
        with:
          styles: https://github.com/errata-ai/write-good/releases/latest/download/write-good.zip
          files: docs/content/sre
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
  build:
    runs-on: ubuntu-latest
    if: "! contains(toJSON(github.event.commits.*.message), '[skip-ci]')"
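For anyone extending this job later, the link-check action also takes inputs to scope and quiet the run. A hedged sketch of an alternative step, assuming the action's `folder-path` and `use-quiet-mode` inputs (verify against the action's README before relying on them):

```
      # Hypothetical variant of the link_check step, scoped to the docs tree.
      # folder-path and use-quiet-mode are assumed inputs of this action.
      - name: Link check (docs only)
        uses: gaurav-nelson/github-action-markdown-link-check@v1
        with:
          config-file: .github/md_config.json
          folder-path: docs/content/sre
          use-quiet-mode: 'yes'
```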
9 changes: 9 additions & 0 deletions .vale.ini
@@ -0,0 +1,9 @@
StylesPath = "styles"
MinAlertLevel = warning

Vocab = "Stakater"

# Only check MarkDown files
[*.md]

BasedOnStyles = Vale
4 changes: 2 additions & 2 deletions docs/content/sre/addons/introduction.md
@@ -8,11 +8,11 @@ Logging | ElasticSearch, Fluentd, Kibana
Monitoring | Prometheus, Grafana
CI (continuous integration) | Tekton
CD (continuous delivery) | ArgoCD
Internal alerting | AlertManager
Internal alerting | Alertmanager
Service mesh | Istio, Kiali, Jaeger (only one fully managed control plane)
Image scanning | Trivy
Backups & Recovery | Velero
SSO (for managed addons) | KeyCloak
SSO (for managed addons) | Keycloak
Secrets management | Vault
Artifacts management | Nexus
Code inspection | SonarQube
12 changes: 6 additions & 6 deletions docs/content/sre/alerting/downtime-notifications-uptimerobot.md
@@ -1,11 +1,11 @@
# External downtime alerting

Stakater App Agility Platform provides downtime notifications for Applications via [IngressMonitorController](https://github.com/stakater/IngressMonitorController) which out of the box integrates with [UptimeRobot](https://uptimerobot.com) and many other services. For this guide we will configure a slack channel for recieving the alerts; but you can can configure any medium supported by the service (email, pagerduty, etc.).
Stakater App Agility Platform provides downtime notifications for Applications via [IngressMonitorController](https://github.com/stakater/IngressMonitorController) which out of the box integrates with [UptimeRobot](https://uptimerobot.com) and many other services. For this guide we will configure a slack channel for receiving the alerts; but you can configure any medium supported by the service (email, PagerDuty, etc.).

To configure downtime alerting do following:

1. Configure incoming webhook in slack
2. Create alert contact on uptimerobot with webhook
2. Create alert contact on UptimeRobot with webhook
3. Update IMC configuration
4. Enable EndpointMonitor in the application
5. Validate downtime notification
@@ -22,9 +22,9 @@ To configure downtime alerting do following:
### Items to be provided to Stakater Support
- `Incoming WebHook URL`

## 2. Create alert contact on uptimerobot with webhook
## 2. Create alert contact on UptimeRobot with webhook

Create alert contact on uptimerobot
Create alert contact on UptimeRobot

_TODO Add details with screen shots_

@@ -36,7 +36,7 @@ _TODO Add details with screen shots_

## 4. Enable EndpointMonitor in the application

Stakater helm application chart supports [endpointMonitor](https://github.com/stakater-charts/application/blob/master/application/values.yaml#L465-L475); just enable it i.e.
Stakater Helm application chart supports [`endpointMonitor`](https://github.com/stakater-charts/application/blob/master/application/values.yaml#L465-L475); just enable it i.e.

```
endpointMonitor:
@@ -45,7 +45,7 @@ endpointMonitor:

## 5. Validate downtime notification

Reduce replicas to zero; and you should recieve downtime notification!
Reduce replicas to zero; and you should receive downtime notification!

```
deployment:
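Both values snippets in this file are collapsed in the diff view above. A minimal sketch of the two settings together, assuming the key names used in the surrounding text (the application chart's actual values may differ):

```
# Hypothetical values for the Stakater application chart; key names are
# assumptions taken from the docs above, not copied from the chart.
endpointMonitor:
  enabled: true    # lets IngressMonitorController register the route with UptimeRobot

deployment:
  replicas: 0      # scaling to zero should trigger the downtime notification
```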
12 changes: 6 additions & 6 deletions docs/content/sre/alerting/log-alerts.md
@@ -1,11 +1,11 @@
# Log alerting

Stakater App Agility Platform provides alerting for applications logs via [Konfigurator](https://github.com/stakater/Konfigurator) which out of the box integrates with fluentd. These alerts land on Slack channel(s) so that any Errors/Warnings can be responded immediately.
Stakater App Agility Platform provides alerting for applications logs via [Konfigurator](https://github.com/stakater/Konfigurator) which out of the box integrates with Fluentd. These alerts land on Slack channel(s) so that any Errors/Warnings can be responded immediately.

To configure log alerting do following:

1. Configure the incoming webhook in slack
2. Configure `FluentdConfigAnnotation` in application helm chart
2. Configure `FluentdConfigAnnotation` in application Helm chart

## 1. Configure the incoming webhook in slack

@@ -20,15 +20,15 @@ Always use Slack bot account to manage incoming webhooks. An integration/app mig
- After picking a channel or user to be notified, click the `Add Incoming WebHooks Integration` Button. The most important part on the next screen is the `WebHook URL`. Make sure you copy this URL and save it
- Near the bottom of this page, you may further customize the Incoming WebHook you just created. Give it a name, description and perhaps a custom icon.

## 2. Configure `FluentdConfigAnnotation` in application helm chart
## 2. Configure `FluentdConfigAnnotation` in application Helm chart

The configuration to parse/match/send logs can be specified in the [Application Chart](https://github.com/stakater-charts/application).

| Parameter | Description |
|:---|:---|
|.Values.deployment.fluentdConfigAnnotations.notifications.slack|specify slack *webhookURL* and *channelName*|
|.Values.deployment.fluentdConfigAnnotations.key|specify log field to match the regex|
|.Values.deployment.fluentdConfigAnnotations.pattern|specify regex to be matched|
|`.Values.deployment.fluentdConfigAnnotations.notifications.slack`|specify slack *`webhookURL`* and *`channelName`*|
|`.Values.deployment.fluentdConfigAnnotations.key`|specify log field to match the regex|
|`.Values.deployment.fluentdConfigAnnotations.pattern`|specify regex to be matched|

We recommend to log as JSON but for some reason if you can't then follow the next step as well.

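The parameter table above maps onto a values block in the application chart. A hedged sketch with placeholder values, assuming the nesting implied by the parameter paths:

```
# Hypothetical sketch based on the parameter table above; the webhook URL,
# channel name, key and pattern are all placeholders.
deployment:
  fluentdConfigAnnotations:
    notifications:
      slack:
        webhookURL: https://hooks.slack.com/services/XXX/YYY/ZZZ
        channelName: my-app-log-alerts
    key: level                # log field to match against the regex
    pattern: (ERROR|WARN)     # regex; matching lines are sent to Slack
```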
4 changes: 2 additions & 2 deletions docs/content/sre/alerting/predefined-prometheusrules.md
@@ -1,8 +1,8 @@
# Predefined PrometheusRules

There are few pre-defined PrometheusRules that come with the platfrom. You can use existing existing rules to forward alerts to your prefferred medium of choice.
There are few pre-defined PrometheusRules that come with the platform. You can use existing rules to forward alerts to your preferred medium of choice.

Following are the rules along their descriptions
Following are the rules along their descriptions.

## Kubernetes Apps

22 changes: 11 additions & 11 deletions docs/content/sre/alerting/workload-application-alerts.md
@@ -1,6 +1,6 @@
# Internal alerting

Stakater App Agility Platform also provides fully managed dedicated workload monitoring stack based on Prometheus, AlertManager and Grafana.
Stakater App Agility Platform also provides fully managed dedicated workload monitoring stack based on Prometheus, Alertmanager and Grafana.

To configure alerting for your application do following:

@@ -14,14 +14,14 @@ To configure alerting for your application do following:

Service Monitor uses the service that is used by your application. Then Service Monitor scrapes metrics via that service.

You need to define ServiceMonitor so, the application metrics can be scrapped.
You need to define `ServiceMonitor` so, the application metrics can be scrapped.

ServiceMonitor can be enabled in [Application Chart](https://github.com/stakater-charts/application).
`ServiceMonitor` can be enabled in [Application Chart](https://github.com/stakater-charts/application).

| Parameter | Description |
|:---|:---|
| .Values.serviceMonitor.enabled | Enable serviceMonitor
| .Values.serviceMonitor.endpoints | Array of endpoints to be scraped by prometheus
| `.Values.serviceMonitor.enabled` | Enable `ServiceMonitor`
| `.Values.serviceMonitor.endpoints` | Array of endpoints to be scraped by Prometheus

```
serviceMonitor:
@@ -32,9 +32,9 @@ serviceMonitor:
port: http
```
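The block above is collapsed in this diff; a fuller sketch of the same `serviceMonitor` values, where the scrape interval and metrics path are assumed placeholders:

```
# Hypothetical sketch; enabled/endpoints come from the parameter table above,
# interval and path are placeholder values.
serviceMonitor:
  enabled: true
  endpoints:
    - interval: 30s
      path: /metrics
      port: http
```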

## 2. Create AlertmanagerConfig for the applicaiton
## 2. Create AlertmanagerConfig for the application

You need to define AlertmanagerConfig to direct alerts to your target alerting medium like slack, pagetduty, etc.
You need to define AlertmanagerConfig to direct alerts to your target alerting medium like Slack, PagerDuty, etc.

A sample AlertmanagerConfig can be configured in [Application Chart](https://github.com/stakater-charts/application).

@@ -44,9 +44,9 @@ A sample AlertmanagerConfig can be configured in [Application Chart](https://git
| .Values.alertmanagerConfig.spec.route | The Alertmanager route definition for alerts matching the resource’s namespace. It will be added to the generated Alertmanager configuration as a first-level route
| .Values.alertmanagerConfig.spec.receivers | List of receivers

We will use slack as an example here.
We will use Slack as an example here.

Step 1: Create a `slack-webhook-config` secret which holds slack webhook-url
Step 1: Create a `slack-webhook-config` secret which holds Slack webhook URL

```
kind: Secret
@@ -59,7 +59,7 @@ data:
type: Opaque
```
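The Secret above is also collapsed in the diff. A complete hedged example could look like the following; the key name under `data` is an assumption, and its value must be the base64-encoded Slack webhook URL:

```
# Hypothetical complete Secret; the data key name is an assumption.
kind: Secret
apiVersion: v1
metadata:
  name: slack-webhook-config
data:
  webhook-url: "<base64-encoded Slack webhook URL>"
type: Opaque
```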

Step 2: Add a alertmanagerConfig spec to use `slack-webhook-config` secret created above in step 1, you need to replace `<workload-alertmanager-url>` with the link of Workload Alertmanager that you can get from forecastle.
Step 2: Add a alertmanagerConfig spec to use `slack-webhook-config` secret created above in step 1, you need to replace `<workload-alertmanager-url>` with the link of Workload Alertmanager that you can get from Forecastle.

```
alertmanagerConfig:
@@ -105,7 +105,7 @@ AlertmanagerConfig will add a match with your namespace name by default, which w
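The `alertmanagerConfig` block of step 2 is largely collapsed in this diff. A hedged sketch of a Slack receiver wired to the `slack-webhook-config` Secret, where the receiver fields follow the AlertmanagerConfig CRD and the channel, interval and `enabled` flag are assumptions:

```
# Hypothetical values sketch; `enabled` and the exact chart nesting are
# assumptions, receiver fields follow the AlertmanagerConfig CRD.
alertmanagerConfig:
  enabled: true
  spec:
    route:
      receiver: slack-notifications
      repeatInterval: 1h
    receivers:
      - name: slack-notifications
        slackConfigs:
          - channel: '#my-app-alerts'
            apiURL:
              name: slack-webhook-config   # Secret from step 1 above
              key: webhook-url             # assumed key, see the Secret sketch
            sendResolved: true
```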

## 3. [Optional] Create PrometheusRule for the application

Stakater App Agility Platforms comes with lots of [Predefined PrometheusRules](./predefined-prometheusrules.md) which covers most of the commmon use cases.
Stakater App Agility Platforms comes with lots of [Predefined PrometheusRules](./predefined-prometheusrules.md) which covers most of the common use cases.

If required you can definitely create a new PrometheusRule to define for defining alerting rule.
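If a custom rule is needed, a minimal PrometheusRule sketch (the metric, threshold and labels are placeholders, not platform defaults):

```
# Hypothetical custom rule; expression, duration and labels are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-app-rules
spec:
  groups:
    - name: my-app.rules
      rules:
        - alert: MyAppHighErrorRate
          expr: sum(rate(http_requests_total{status=~"5.."}[5m])) > 0.1
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: High 5xx rate for my-app
```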
