Merge pull request #1892 from tisnik/updated-doc
Updated doc before we change it with info about new consumer and new storage
tisnik authored Nov 29, 2023
2 parents d8a9064 + e21fc4a commit 35c9bab
Showing 7 changed files with 63 additions and 27 deletions.
README.md: 21 changes (21 additions & 0 deletions)
@@ -18,6 +18,7 @@ Aggregator service for insights results
* [Description](#description)
* [Documentation](#documentation)
* [Makefile targets](#makefile-targets)
* [Usage](#usage)
* [BDD tests](#bdd-tests)
* [Package manifest](#package-manifest)

@@ -69,6 +70,26 @@ help Show this help screen
function_list List all functions in generated binary file
```

## Usage

```
Usage:
./insights-results-aggregator [command]
The commands are:
<EMPTY> starts aggregator
start-service starts aggregator
help prints help
print-help prints help
print-config prints current configuration set by files & env variables
print-env prints env variables
print-version-info prints version info
migration prints information about migrations (current, latest)
migration <version> migrates database to the specified version
```
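
For example, a typical local session might look like this (the commands come straight from the help text above; the working directory and configuration setup are assumed, not prescribed):

```
./insights-results-aggregator print-config    # inspect the effective configuration
./insights-results-aggregator migration       # show current and latest DB migration
./insights-results-aggregator start-service   # run the aggregator itself
```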

## BDD tests

Behaviour tests for this service are included in [Insights Behavioral
docs/architecture.md: 45 changes (26 additions & 19 deletions)
@@ -6,30 +6,37 @@ nav_order: 1

Aggregator service consists of three main parts:

1. Consumer that reads (consumes) Insights OCP messages from specified message broker. Usually Kafka
   broker is used but it might be possible to develop an interface for a different broker. Insights
   OCP messages are basically encoded in JSON and contain results generated by the rule engine.
2. HTTP or HTTPS server that exposes REST API endpoints that can be used to read the list of
   organizations, list of clusters, read rule results for a selected cluster etc. Additionally,
   basic metrics are exposed as well. Those metrics are configured to be consumed by Prometheus and
   visualized by Grafana.
3. Storage backend which is some instance of SQL database. Currently SQLite3 and PostgreSQL are
   fully supported, but more SQL databases might be added later.
1. Consumer that reads (consumes) Insights OCP messages from the specified
   message broker. Usually a Kafka broker is used, but it might be possible to
   develop an interface for a different broker. Insights OCP messages are
   basically encoded in JSON and contain results generated by the rule engine.
   A different consumer can be selected to consume and process DVO
   recommendations (a minimal consumer sketch follows this list).
2. HTTP or HTTPS server that exposes REST API endpoints that can be used to
   read the list of organizations, the list of clusters, rule results for a
   selected cluster, etc. Additionally, basic metrics are exposed as well.
   Those metrics are configured to be consumed by Prometheus and visualized by
   Grafana.
3. Storage backend, which is some instance of an SQL database or Redis storage.
   Currently only PostgreSQL is fully supported, but more SQL databases might
   be added later.
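
The consumer-to-storage path from item 1 can be pictured with a short Go sketch. This is an illustration only, not the service's actual code: the topic name, message fields, table layout, and connection strings are assumptions, and the real consumer adds validation, metrics, and migrations on top.

```go
package main

import (
	"database/sql"
	"encoding/json"
	"log"

	"github.com/IBM/sarama" // Kafka client; the service may pin a different fork
	_ "github.com/lib/pq"   // PostgreSQL driver
)

// incomingMessage mirrors the minimal assumed shape of an Insights OCP result:
// organization ID, cluster name, and the rule engine report as raw JSON.
type incomingMessage struct {
	OrgID       int             `json:"OrgID"`
	ClusterName string          `json:"ClusterName"`
	Report      json.RawMessage `json:"Report"`
}

func main() {
	// Storage backend: a PostgreSQL instance (connection string is a placeholder).
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/aggregator?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Consumer part: read messages from a Kafka broker.
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	// Topic name is hypothetical; a DVO consumer would read a different topic.
	pc, err := consumer.ConsumePartition("ocp.results", 0, sarama.OffsetNewest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		var m incomingMessage
		if err := json.Unmarshal(msg.Value, &m); err != nil {
			log.Printf("skipping malformed message: %v", err)
			continue
		}
		// Keep only the latest report per (organization, cluster) pair;
		// assumes a unique constraint on (org_id, cluster).
		_, err := db.Exec(
			`INSERT INTO report (org_id, cluster, report) VALUES ($1, $2, $3)
			 ON CONFLICT (org_id, cluster) DO UPDATE SET report = EXCLUDED.report`,
			m.OrgID, m.ClusterName, string(m.Report))
		if err != nil {
			log.Printf("storage error: %v", err)
		}
	}
}
```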

## Whole data flow

![data_flow]({{ "assets/customer-facing-services-architecture.png" | relative_url}})

1. Event about new data from insights operator is consumed from Kafka. That event contains (among
other things) URL to S3 Bucket
2. Insights operator data is read from S3 Bucket and Insights rules are applied to that data
3. Results (basically organization ID + cluster name + insights results JSON) are stored back into
Kafka, but into different topic
4. Those results are consumed by Insights rules aggregator service that caches them
5. The service provides such data via REST API to other tools, like OpenShift Cluster Manager web
UI, OpenShift console, etc.

Optionally, an organization allowlist can be enabled by the configuration variable
1. An event about new data from the insights operator is consumed from Kafka.
   That event contains (among other things) a URL to an S3 bucket.
2. Insights operator data is read from the S3 bucket and Insights OCP rules are
   applied to that data. Alternatively, DVO rules are applied to the same data.
3. Results (basically organization ID + cluster name + Insights OCP
   recommendations JSON or DVO recommendations) are stored back into Kafka, but
   into a different topic.
4. Those results are consumed by the Insights rules aggregator service, which
   caches them (i.e. stores them into the selected database).
5. The service provides such data via REST API to other tools, like the
   OpenShift Cluster Manager web UI, the OpenShift console, etc. (a minimal
   endpoint sketch follows this list).
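
A similarly hedged sketch of step 5, serving a cached report over REST. The endpoint path, query parameters, and table layout are hypothetical; the real API surface is documented elsewhere in this repository.

```go
package main

import (
	"database/sql"
	"log"
	"net/http"
	"strconv"

	_ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/aggregator?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	// GET /report?org_id=...&cluster=... returns the cached JSON report.
	http.HandleFunc("/report", func(w http.ResponseWriter, r *http.Request) {
		orgID, err := strconv.Atoi(r.URL.Query().Get("org_id"))
		if err != nil {
			http.Error(w, "org_id must be an integer", http.StatusBadRequest)
			return
		}
		cluster := r.URL.Query().Get("cluster")

		var report string
		err = db.QueryRow(
			`SELECT report FROM report WHERE org_id = $1 AND cluster = $2`,
			orgID, cluster).Scan(&report)
		switch {
		case err == sql.ErrNoRows:
			http.Error(w, "no report for this cluster", http.StatusNotFound)
		case err != nil:
			http.Error(w, "storage error", http.StatusInternalServerError)
		default:
			w.Header().Set("Content-Type", "application/json")
			w.Write([]byte(report))
		}
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```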

Optionally, a so-called organization allowlist can be enabled by the configuration variable
`enable_org_allowlist`, which enables processing of a .csv file containing organization IDs (path
specified by the config variable `org_allowlist`) and allows report processing only for these
organizations. This feature is disabled by default, and might be removed altogether in the near
docs/ci.md: 1 change (1 addition & 0 deletions)
@@ -25,6 +25,7 @@ cyclomatic complexity > 9
* `abcgo` to measure ABC metrics for Go source code and check that the metrics do not exceed the
specified threshold
* `golangci-lint` as Go linters aggregator with a lot of linters enabled: https://golangci-lint.run/usage/linters/
* BDD tests that check the overall Insights Results Aggregator behaviour.

Please note that all checks mentioned above have to pass for the change to be merged into master
branch.
docs/clowder.md: 10 changes (7 additions & 3 deletions)
@@ -5,7 +5,7 @@ nav_order: 3

# Clowder configuration

As the rest of the services deployed in the Console RedHat platform, the
Like the rest of the services deployed in the Console Red Hat platform, the
Insights Results Aggregator DB Writer should update its configuration
using the relevant values extracted from the Clowder configuration file.

@@ -27,10 +27,12 @@ configuration.

# Insights Results Aggregator specific relevant values

This service is running in 2 different modes in the platform:
This service runs in 3 different modes in the platform:

- DB Writer: the service connects to Kafka to receive messages in a
specific topic and write the results in a database.
specific topic and writes the results into a SQL database.
- Cache Writer: the service connects to Kafka to receive messages in a
specific topic and writes the results into Redis.
- Results Aggregator: exposes the data stored in the database through several
API endpoints.

@@ -39,4 +41,6 @@ different:

- DB Writer needs to update its Kafka access configuration and its DB
access configuration in order to work.
- Cache Writer needs to update its Kafka access configuration and its DB
access configuration in order to work.
- Results Aggregator just needs to update its DB access configuration (a
Clowder lookup sketch follows this list).
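
To make the per-mode configuration concrete, here is a minimal sketch of how a service might pick up Kafka and database access values from Clowder via the app-common-go library. The field names follow that library's v1 API as best I can tell; verify them against the version pinned in go.mod.

```go
package main

import (
	"fmt"

	clowder "github.com/redhatinsights/app-common-go/pkg/api/v1"
)

func main() {
	// Clowder mounts a JSON config file and points ACG_CONFIG at it;
	// the library loads it automatically on import.
	if !clowder.IsClowderEnabled() {
		fmt.Println("no Clowder config found, falling back to local defaults")
		return
	}
	cfg := clowder.LoadedConfig

	// Kafka access: needed by the DB Writer and Cache Writer modes.
	if cfg.Kafka != nil {
		for _, b := range cfg.Kafka.Brokers {
			fmt.Printf("kafka broker: %s:%d\n", b.Hostname, *b.Port)
		}
	}

	// Database access: needed in one form or another by all three modes.
	if cfg.Database != nil {
		db := cfg.Database
		fmt.Printf("db: %s@%s:%d/%s\n", db.Username, db.Hostname, db.Port, db.Name)
	}
}
```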
docs/db_retention_policy.md: 2 changes (1 addition & 1 deletion)
@@ -16,7 +16,7 @@ layout: page

## List of tables

All tables that are stored in external data pipeline database:
All tables that are stored in external data pipeline database (OCP Recommendations):

```
Schema | Name | Type
…
```
docs/documentation_for_developers.md: 7 changes (3 additions & 4 deletions)
@@ -7,10 +7,9 @@ nav_order: 16
All packages developed in this project have documentation available on [GoDoc server](https://godoc.org/):

* [entry point to the service](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator)
* [package `broker`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/broker)
* [package `consumer`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/consumer)
* [package `content`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/content)
* [package `logger`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/logger)
* [package `broker`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/broker)
* [package `conf`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/conf)
* [package `consumer`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/consumer)
* [package `metrics`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/metrics)
* [package `migration`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/migration)
* [package `producer`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/producer)
docs/references.md: 4 changes (4 additions & 0 deletions)
@@ -3,5 +3,9 @@ layout: page
nav_order: 18
---
# References
- [Smart Proxy](https://github.com/RedHatInsights/smart-proxy)
- [Insights Data Schemas](https://redhatinsights.github.io/insights-data-schemas/)
- [Insights Results Aggregator Data](https://github.com/RedHatInsights/insights-results-aggregator-data)
- [Insights Results Aggregator Cleaner](https://github.com/RedHatInsights/insights-results-aggregator-cleaner)
- [Insights Results Aggregator Exporter](https://github.com/RedHatInsights/insights-results-aggregator-exporter)
- [Insights Content Service](https://github.com/RedHatInsights/insights-content-service)
