PMU data visualization example using Kafka and TimescaleDB #9

Open · wants to merge 2 commits into base: main
1 change: 1 addition & 0 deletions README.md
@@ -10,3 +10,4 @@ More examples to come soon!

- [PMU Data Visualization](pmu-data-visualization)
- [Pyvolt DPsim Demo](pyvolt-dpsim-demo)
- [PMU/Kafka/TimescaleDB Data Visualization Demo](pmu-kafka-timescale-demo)
122 changes: 122 additions & 0 deletions pmu-kafka-timescale-demo/README.md
@@ -0,0 +1,122 @@
# PMU/Kafka/TimescaleDB Data Visualization Demo

This directory contains deployment instructions and the corresponding configuration files for a PMU data visualization demo built on the SOGNO platform.
We assume you have a full-fledged or lightweight Kubernetes cluster up and running.
Please ensure your setup is in line with [this](https://sogno-platform.github.io/docs/getting-started/single-node/) base setup.

## Create Namespace
Create the `demo` namespace where all resources will be deployed
```bash
$ kubectl create namespace demo
```

## Container Registry credentials

This deployment requires access to a container registry to pull and push Docker images. To enable this, a secret with the registry credentials has to be created.

Modify `regcred-secret.yaml` with the appropriate credentials (the `data[.dockerconfigjson]` corresponds to the contents of `~/.docker/config.json` encoded in base64) and apply the secret manifest:
```bash
kubectl apply -f regcred-secret.yaml -n demo
```
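
If you need to produce that base64 string, a one-liner along these lines works (assuming your registry login is already stored in `~/.docker/config.json`; `-w0` is the GNU coreutils flag to disable line wrapping, on macOS use `base64 -i` instead):
```bash
# Encode the local Docker config for the data[.dockerconfigjson] field
$ base64 -w0 ~/.docker/config.json
```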

## Visualization Stack

### Kafka/Strimzi Deployment

Deploy the Strimzi Cluster Operator
```bash
$ helm repo add strimzi https://strimzi.io/charts/
$ helm repo update
$ helm install strimzi strimzi/strimzi-kafka-operator -n demo
```

Deploy the Kafka Cluster
```bash
$ kubectl apply -f strimzi/strimzi-kafka-cluster.yaml -n demo
```

Wait for the Kafka cluster to be ready
```bash
$ kubectl wait kafka/strimzi-cluster -n demo --for=condition=Ready --timeout=300s
```
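
Optionally, list the pods the operator created as a quick sanity check (Strimzi labels its pods with `strimzi.io/cluster=<cluster-name>`):
```bash
# Broker, Zookeeper and entity-operator pods should all be Running
$ kubectl get pods -n demo -l strimzi.io/cluster=strimzi-cluster
```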

### Timescale Database

Add helm chart repo
```bash
$ helm repo add timescaledb 'https://raw.githubusercontent.com/timescale/timescaledb-kubernetes/master/charts/repo/'
```

Create a TLS certificate and the credential secrets, then install the helm chart
```bash
$ openssl req -x509 -sha256 -nodes -newkey rsa:4096 -days 3650 -subj "/CN=*.timescaledb.svc.cluster.local" -keyout tls.key -out tls.crt
$ kubectl create secret generic -n demo timescaledb-cluster-certificate --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key
$ rm tls.crt tls.key
$ kubectl apply -f timescaledb/timescaledb-credentials-secret.yaml -n demo
$ helm install timescaledb-cluster timescaledb/timescaledb-single -n demo -f timescaledb/timescaledb-values.yaml
```

Open a psql session directly on the master pod
```bash
$ MASTERPOD="$(kubectl get pod -o name --namespace demo -l release=timescaledb-cluster)"
$ kubectl exec -i --tty --namespace demo ${MASTERPOD} -- psql -U postgres
```

Create a database named `kafka` with a `kafka` user and grant it access
```sql
> CREATE DATABASE kafka;
> CREATE ROLE kafka WITH LOGIN SUPERUSER PASSWORD 'kafka';
> GRANT ALL PRIVILEGES ON DATABASE kafka TO kafka;
```
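
To double-check the database and role from outside the session, the same exec pattern works with psql meta-commands:
```bash
# \l lists databases, \du lists roles; both should now show "kafka"
$ kubectl exec -i --tty --namespace demo ${MASTERPOD} -- psql -U postgres -c '\l'
$ kubectl exec -i --tty --namespace demo ${MASTERPOD} -- psql -U postgres -c '\du'
```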

### Kafka Connect/Kafka Connector

Modify the container image registry URL at `kafka-connect/kafka-connect.yaml` and apply the Kafka Connect manifest
```bash
$ kubectl apply -f kafka-connect/kafka-connect.yaml -n demo
```

Apply the Kafka Sink Connector manifest
```bash
$ kubectl apply -f kafka-connect/kafka-sink-connector.yaml -n demo
```
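
The Connect cluster first builds and pushes its plugin image, so it can take a few minutes before the connector is usable. Both resources are standard Strimzi CRDs and can be inspected with kubectl:
```bash
# Check the status of the Connect cluster and the sink connector
$ kubectl get kafkaconnect,kafkaconnector -n demo
```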

### PMU Simulation
Modify the template at `pmu-dummy/template-configmap.yaml` if necessary and apply the configmap manifest
```bash
$ kubectl apply -f pmu-dummy/template-configmap.yaml -n demo
```

Modify the environment variables at `pmu-dummy/deployment.yaml` and apply the deployment manifest
```bash
$ kubectl apply -f pmu-dummy/deployment.yaml -n demo
```
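
To confirm the simulator is running and publishing, tail its logs (the deployment is named `pmu-dummy` in the manifest below):
```bash
# Follow the PMU simulator output
$ kubectl logs -f deployment/pmu-dummy -n demo
```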

### Kafka Streams

Modify the environment variables at `kafka-streams/kafka-streams-deployment.yaml` and apply the deployment manifest
```bash
$ kubectl apply -f kafka-streams/kafka-streams-deployment.yaml -n demo
```
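
To verify that flattened records reach the output topic, a console consumer inside one of the broker pods is the quickest check. A sketch, assuming the default Strimzi pod naming `<cluster>-kafka-<n>`:
```bash
# Read a few records from the topic the sink connector consumes
$ kubectl exec -n demo strimzi-cluster-kafka-0 -c kafka -- \
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic pmu-dummy-out --from-beginning --max-messages 5
```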

### Grafana

Adjust the host URL inside *visualization/grafana_values.yaml* for the Ingress component, then add the Grafana helm repo and install the chart
```bash
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm install grafana grafana/grafana -f visualization/grafana_values.yaml -n demo
```
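
If no Ingress controller is available, port-forwarding the Grafana service is a simple alternative (the service name `grafana` matches the helm release above; the chart's service listens on port 80 by default):
```bash
# Expose Grafana on http://localhost:3000
$ kubectl port-forward -n demo svc/grafana 3000:80
```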

Apply dashboard configmap
```bash
$ kubectl apply -f visualization/dashboard-configmap.yaml -n demo
```

Get admin password
```bash
$ kubectl get secret -n demo grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```

Access the URL configured in the Ingress component in a web browser to visualize the data
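
As an end-to-end check you can also query TimescaleDB directly. With `auto.create: true` the JDBC sink creates a table named after the topic by default (`pmu-dummy-out`, which needs quoting because of the hyphens); adjust the name if you changed `table.name.format`:
```bash
$ kubectl exec -i --tty --namespace demo ${MASTERPOD} -- \
    psql -U postgres -d kafka -c 'SELECT * FROM "pmu-dummy-out" LIMIT 5;'
```
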
45 changes: 45 additions & 0 deletions pmu-kafka-timescale-demo/demo-setup.sh
@@ -0,0 +1,45 @@
#!/bin/bash

namespace=demo

kubectl create namespace $namespace

kubectl apply -f regcred-secret.yaml -n $namespace

# Strimzi
helm repo add strimzi https://strimzi.io/charts/
helm repo update
helm install strimzi strimzi/strimzi-kafka-operator -n $namespace
kubectl apply -f strimzi/strimzi-kafka-cluster.yaml -n $namespace
kubectl wait kafka/strimzi-cluster -n $namespace --for=condition=Ready --timeout=300s

# Timescale
helm repo add timescaledb 'https://raw.githubusercontent.com/timescale/timescaledb-kubernetes/master/charts/repo/'
openssl req -x509 -sha256 -nodes -newkey rsa:4096 -days 3650 -subj "/CN=*.timescaledb.svc.cluster.local" -keyout tls.key -out tls.crt
kubectl create secret generic -n $namespace timescaledb-cluster-certificate --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key
rm tls.crt tls.key
kubectl apply -f timescaledb/timescaledb-credentials-secret.yaml -n $namespace
helm install timescaledb-cluster timescaledb/timescaledb-single -n $namespace -f timescaledb/timescaledb-values.yaml
kubectl wait pod/timescaledb-cluster-0 -n $namespace --for=condition=Ready --timeout=60s
kubectl exec timescaledb-cluster-0 -n $namespace -- psql -U postgres -c 'CREATE DATABASE kafka;'
kubectl exec timescaledb-cluster-0 -n $namespace -- psql -U postgres -c "CREATE ROLE kafka WITH LOGIN SUPERUSER PASSWORD 'kafka';"
kubectl exec timescaledb-cluster-0 -n $namespace -- psql -U postgres -c 'GRANT ALL PRIVILEGES ON DATABASE kafka TO kafka;'

# Kafka Connect
kubectl apply -f kafka-connect/kafka-connect.yaml -n $namespace

# Kafka Connector
kubectl apply -f kafka-connect/kafka-sink-connector.yaml -n $namespace

# PMU-dummy
kubectl apply -f pmu-dummy/template-configmap.yaml -n $namespace
kubectl apply -f pmu-dummy/deployment.yaml -n $namespace

# Kafka Streams
kubectl apply -f kafka-streams/kafka-streams-deployment.yaml -n $namespace

# Grafana
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana -f visualization/grafana_values.yaml -n $namespace
kubectl apply -f visualization/dashboard-configmap.yaml -n $namespace
kubectl get secret -n $namespace grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
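
# --- Optional teardown (not part of the original setup script) ---
# A minimal sketch for removing everything created above; release and resource
# names match the install commands in this script.
# helm uninstall grafana timescaledb-cluster strimzi -n $namespace
# kubectl delete namespace $namespace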
42 changes: 42 additions & 0 deletions pmu-kafka-timescale-demo/kafka-connect/kafka-connect.yaml
@@ -0,0 +1,42 @@
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
name: kafka-connect-cluster
annotations:
strimzi.io/use-connector-resources: "true"
spec:
replicas: 1
bootstrapServers: strimzi-cluster-kafka-bootstrap.demo:9092
config:
group.id: kafka-connect-cluster
offset.storage.topic: kafka-connect-cluster-offsets
config.storage.topic: kafka-connect-cluster-configs
status.storage.topic: kafka-connect-cluster-status
key.converter: org.apache.kafka.connect.json.JsonConverter
value.converter: org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable: true
value.converter.schemas.enable: true
config.storage.replication.factor: 1
offset.storage.replication.factor: 1
status.storage.replication.factor: 1
config.providers: file
config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
externalConfiguration:
volumes:
- name: timescaledb-cluster-credentials
secret:
secretName: timescaledb-cluster-credentials
build:
output:
type: docker
image: registry.example.com/pmu-kafka-timescale-demo/kafka-connect-cluster
pushSecret: regcred
plugins:
- name: confluent-postgres-connector
artifacts:
- type: zip
url: https://d1i4a15mxbxib1.cloudfront.net/api/plugins/confluentinc/kafka-connect-jdbc/versions/10.2.0/confluentinc-kafka-connect-jdbc-10.2.0.zip
template:
pod:
imagePullSecrets:
- name: regcred
15 changes: 15 additions & 0 deletions pmu-kafka-timescale-demo/kafka-connect/kafka-sink-connector.yaml
@@ -0,0 +1,15 @@
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
name: kafka-sink-connector
labels:
strimzi.io/cluster: kafka-connect-cluster
spec:
class: io.confluent.connect.jdbc.JdbcSinkConnector
tasksMax: 1
config:
topics: pmu-dummy-out
connection.url: jdbc:postgresql://timescaledb-cluster.demo.svc.cluster.local:5432/kafka
connection.user: kafka
connection.password: kafka
auto.create: true
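# Note: connection.user / connection.password are hard-coded above for simplicity.
# The Connect cluster mounts the timescaledb-cluster-credentials secret via
# externalConfiguration and enables Kafka's FileConfigProvider, so a hardened
# variant could read the credentials from the mounted secret instead. A sketch,
# assuming the secret carries a properties file credentials.properties with
# user/password entries (Strimzi mounts external volumes under
# /opt/kafka/external-configuration/<volume-name>):
#   connection.user: ${file:/opt/kafka/external-configuration/timescaledb-cluster-credentials/credentials.properties:user}
#   connection.password: ${file:/opt/kafka/external-configuration/timescaledb-cluster-credentials/credentials.properties:password}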
28 changes: 28 additions & 0 deletions pmu-kafka-timescale-demo/kafka-streams/kafka-streams-deployment.yaml
@@ -0,0 +1,28 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kstreams
spec:
replicas: 1
selector:
matchLabels:
app: kstreams
template:
metadata:
labels:
app: kstreams
spec:
containers:
- name: kstreams
image: registry.example.com/pmu-kafka-timescale-demo/kafka-connect-cluster/pmu-streams-connector:latest
env:
- name: KAFKA_BROKER
value: strimzi-cluster-kafka-bootstrap.demo:9092
- name: INPUT_TOPIC
value: pmu-dummy-in
- name: OUTPUT_TOPIC
value: pmu-dummy-out
- name: APP_ID
value: pmu-dummy-app
imagePullSecrets:
- name: regcred
58 changes: 58 additions & 0 deletions pmu-kafka-timescale-demo/pmu-dummy/deployment.yaml
@@ -0,0 +1,58 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: pmu-dummy
labels:
app: pmu-dummy
spec:
selector:
matchLabels:
app: pmu-dummy
replicas: 1
strategy: {}
template:
metadata:
labels:
app: pmu-dummy
spec:
containers:
- name: pmu-dummy
image: registry.example.com/pmu-kafka-timescale-demo/kafka-connect-cluster/pmu-dummy:latest
volumeMounts:
- name: pmu-dummy-template-config
mountPath: /usr/src/app/device_template.json
subPath: device_template.json
env:
#Config for both MQTT and Kafka
- name: JSON_TEMPLATE
value: /usr/src/app/device_template.json #path to the data template file (/usr/src/app/ is where the Docker image places the application files)
- name: PRODUCER_TYPE
value: kafka #"mqtt" or "kafka"
- name: BROKER_URL
value: strimzi-cluster-kafka-bootstrap.demo #FQDN or IP
- name: BROKER_PORT
value: "9092"
- name: TOPIC_NAME
value: pmu-dummy-in
#MQTT Specific Config
- name: MQTT_USER
value: admin #USERNAME
- name: MQTT_PWD
value: admin #PASSWORD
- name: MQTT_SSL
value: "false" #toggle ssl true/false
- name: MQTT_CAFILE
value: "/etc/ssl/certs/DST_Root_CA_X3.pem" #example ca file for let's Encrypt
- name: MQTT_DEVICE_NAME
value: device1-dummy
ports:
- containerPort: 3000
volumes:
- name: pmu-dummy-template-config
configMap:
name: pmu-dummy-template-config
items:
- key: device_template.json
path: device_template.json
imagePullSecrets:
- name: regcred
63 changes: 63 additions & 0 deletions pmu-kafka-timescale-demo/pmu-dummy/template-configmap.yaml
@@ -0,0 +1,63 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: pmu-dummy-template-config
data:
device_template.json: |
{
"schema": {
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "device"
},
{
"type": "int64",
"optional": false,
"field": "timestamp"
},
{
"type": "string",
"optional": false,
"field": "component"
},
{
"type": "string",
"optional": false,
"field": "measurand"
},
{
"type": "string",
"optional": false,
"field": "phase"
},
{
"type": "float",
"optional": true,
"field": "data"
}
],
"optional": false,
"name": "pmu-dummy-schema"
},
"payload": {
"device": "device1",
"timestamp": "TIMESTAMP",
"readings": [
{
"component": "BUS1",
"measurand": "voltmagnitude",
"phase": "A",
"data": "RANDOM"
},
{
"component": "BUS2",
"measurand": "voltmagnitude",
"phase": "B",
"data": "RANDOM"
}
]
}
}
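# Illustration only: TIMESTAMP and RANDOM in the payload above are placeholders
# that the pmu-dummy producer replaces at publish time. After the Kafka Streams
# stage flattens the readings array, each record is expected to carry the fields
# declared in the schema, along the lines of:
#   {"device": "device1", "timestamp": 1625097600000, "component": "BUS1",
#    "measurand": "voltmagnitude", "phase": "A", "data": 230.02}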
7 changes: 7 additions & 0 deletions pmu-kafka-timescale-demo/regcred-secret.yaml
@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: regcred
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: YmFzZTY0IG9mIH4vLmRvY2tlci9jb25maWcuanNvbg==