diff --git a/README.md b/README.md
index 6285577f..a0be1acb 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,7 @@ Aggregator service for insights results
 * [Description](#description)
 * [Documentation](#documentation)
 * [Makefile targets](#makefile-targets)
+* [Usage](#usage)
 * [BDD tests](#bdd-tests)
 * [Package manifest](#package-manifest)
@@ -69,6 +70,26 @@ help Show this help screen
 function_list List all functions in generated binary file
 ```
+## Usage
+
+```
+Usage:
+
+    ./insights-results-aggregator [command]
+
+The commands are:
+
+                        starts aggregator
+    start-service       starts aggregator
+    help                prints help
+    print-help          prints help
+    print-config        prints current configuration set by files & env variables
+    print-env           prints env variables
+    print-version-info  prints version info
+    migration           prints information about migrations (current, latest)
+    migration           migrates database to the specified version
+```
+
 ## BDD tests
 
 Behaviour tests for this service are included in [Insights Behavioral
diff --git a/docs/architecture.md b/docs/architecture.md
index 20f51727..55f17eca 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -6,30 +6,37 @@ nav_order: 1
 
 Aggregator service consists of three main parts:
 
-1. Consumer that reads (consumes) Insights OCP messages from specified message broker. Usually Kafka
-broker is used but it might be possible to develop a interface for different broker. Insights
-2. OCP messages are basically encoded in JSON and contain results generated by rule engine.
-3. HTTP or HTTPS server that exposes REST API endpoints that can be used to read list of
-organizations, list of clusters, read rules results for selected cluster etc. Additionally,
-basic metrics are exposed as well. Those metrics is configured to be consumed by Prometheus and
-visualized by Grafana.
-4. Storage backend which is some instance of SQL database. Currently SQLite3 and PostgreSQL are
-fully supported, but more SQL databases might be added later.
+1. Consumer that reads (consumes) Insights OCP messages from the specified
+   message broker. Usually a Kafka broker is used, but it would be possible to
+   develop an interface for a different broker. Insights OCP messages are
+   basically encoded in JSON and contain results generated by the rule engine.
+   A different consumer can be selected to consume and process DVO
+   recommendations.
+2. HTTP or HTTPS server that exposes REST API endpoints that can be used to
+   read the list of organizations, the list of clusters, rule results for a
+   selected cluster, etc. Additionally, basic metrics are exposed as well.
+   These metrics are configured to be consumed by Prometheus and visualized by
+   Grafana.
+3. Storage backend, which is an instance of an SQL database or Redis storage.
+   Currently only PostgreSQL is fully supported, but more SQL databases might
+   be added later.
 
 ## Whole data flow
 
 ![data_flow]({{ "assets/customer-facing-services-architecture.png" | relative_url}})
 
-1. Event about new data from insights operator is consumed from Kafka. That event contains (among
-other things) URL to S3 Bucket
-2. Insights operator data is read from S3 Bucket and Insights rules are applied to that data
-3. Results (basically organization ID + cluster name + insights results JSON) are stored back into
-Kafka, but into different topic
-4. That results are consumed by Insights rules aggregator service that caches them
-5. The service provides such data via REST API to other tools, like OpenShift Cluster Manager web
-UI, OpenShift console, etc.
-
-Optionally, an organization allowlist can be enabled by the configuration variable
+1. An event about new data from the insights operator is consumed from Kafka.
+   That event contains (among other things) the URL to an S3 bucket.
+2. Insights operator data is read from the S3 bucket and Insights OCP rules
+   are applied to that data. Alternatively, DVO rules are applied to the same
+   data.
+3. Results (basically organization ID + cluster name + Insights OCP
+   recommendations JSON or DVO recommendations) are stored back into Kafka,
+   but into a different topic.
+4. Those results are consumed by the Insights rules aggregator service, which
+   caches them (i.e. stores them into the selected database).
+5. The service provides this data via a REST API to other tools, like the
+   OpenShift Cluster Manager web UI, the OpenShift console, etc.
+
+Optionally, a so-called organization allowlist can be enabled by the configuration variable
 `enable_org_allowlist`, which enables processing of a .csv file containing organization IDs (path
 specified by the config variable `org_allowlist`) and allows report processing only for these
 organizations. This feature is disabled by default, and might be removed altogether in the near
diff --git a/docs/ci.md b/docs/ci.md
index 7c7791fc..64594bde 100644
--- a/docs/ci.md
+++ b/docs/ci.md
@@ -25,6 +25,7 @@ cyclomatic complexity > 9
 * `abcgo` to measure ABC metrics for Go source code and check if the metrics
 does not exceed specified threshold
 * `golangci-lint` as Go linters aggregator with lot of linters enabled: https://golangci-lint.run/usage/linters/
+* BDD tests that check the overall Insights Results Aggregator behaviour.
 
 Please note that all checks mentioned above have to pass for the change to be
 merged into master branch.
diff --git a/docs/clowder.md b/docs/clowder.md
index 76f480b4..6bda8322 100644
--- a/docs/clowder.md
+++ b/docs/clowder.md
@@ -5,7 +5,7 @@ nav_order: 3
 
 # Clowder configuration
 
-As the rest of the services deployed in the Console RedHat platform, the
+As the rest of the services deployed in the Console Red Hat platform, the
 Insights Results Aggregator DB Writer should update its configuration
 using the relevant values extracted from the Clowder configuration file.
 
@@ -27,10 +27,12 @@ configuration.
 # Insights Results Aggregator specific relevant values
 
-This service is running in 2 different modes in the platform:
+This service is running in 3 different modes in the platform:
 
 - DB Writer: the service connects to Kafka to receive messages in a
-  specific topic and write the results in a database.
+  specific topic and write the results into an SQL database.
+- Cache Writer: the service connects to Kafka to receive messages in a
+  specific topic and write the results into Redis.
 - Results Aggregator: expose the database stored data into several API
   endpoints.
 
@@ -39,4 +41,6 @@ different:
 
 - DB Writer needs to update its Kafka access configuration and its DB access
   configuration in order to work.
+- Cache Writer needs to update its Kafka access configuration and its DB
+  access configuration in order to work.
 - Results Aggregator just need to update its DB access configuration.
diff --git a/docs/db_retention_policy.md b/docs/db_retention_policy.md
index 39531e72..0df5343b 100644
--- a/docs/db_retention_policy.md
+++ b/docs/db_retention_policy.md
@@ -16,7 +16,7 @@ layout: page
 
 ## List of tables
 
-All tables that are stored in external data pipeline database:
+All tables that are stored in the external data pipeline database (OCP Recommendations):
 
 ```
 Schema | Name | Type
diff --git a/docs/documentation_for_developers.md b/docs/documentation_for_developers.md
index 3aa1b58b..33d2b1b0 100644
--- a/docs/documentation_for_developers.md
+++ b/docs/documentation_for_developers.md
@@ -7,10 +7,9 @@ nav_order: 16
 
 All packages developed in this project have documentation available on [GoDoc server](https://godoc.org/):
 
 * [entry point to the service](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator)
-* [package `broker`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/broker)
-* [package `consumer`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/consumer)
-* [package `content`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/content)
-* [package `logger`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/logger)
+* [package `broker`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/broker)
+* [package `conf`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/conf)
+* [package `consumer`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/consumer)
 * [package `metrics`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/metrics)
 * [package `migration`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/migration)
 * [package `producer`](https://godoc.org/github.com/RedHatInsights/insights-results-aggregator/producer)
diff --git a/docs/references.md b/docs/references.md
index b46395e7..3ab2d7af 100644
--- a/docs/references.md
+++ b/docs/references.md
@@ -3,5 +3,9 @@ layout: page
 nav_order: 18
 ---
 # References
+- [Smart Proxy](https://github.com/RedHatInsights/smart-proxy)
 - [Insights Data Schemas](https://redhatinsights.github.io/insights-data-schemas/)
 - [Insights Results Aggregator Data](https://github.com/RedHatInsights/insights-results-aggregator-data)
+- [Insights Results Aggregator Cleaner](https://github.com/RedHatInsights/insights-results-aggregator-cleaner)
+- [Insights Results Aggregator Exporter](https://github.com/RedHatInsights/insights-results-aggregator-exporter)
+- [Insights Content Service](https://github.com/RedHatInsights/insights-content-service)
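---

As a quick illustration of the Usage section added in the README hunk above, a typical session might look as follows. This is a hedged sketch: it assumes the binary has been built locally (e.g. via `make build`), the subcommand names are taken verbatim from the README text, and outputs are omitted because they depend on the configuration files and environment variables in effect.

```
$ ./insights-results-aggregator print-version-info   # show version/build information
$ ./insights-results-aggregator print-config         # show effective configuration (files + env)
$ ./insights-results-aggregator migration            # report current and latest DB migration
$ ./insights-results-aggregator start-service        # run the aggregator service itself
```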