Custom listeners [skip secret scan] (#101)
* mtlsmoreacls * rebase * custom listeners * custom listeners * custom listeners
1 parent af3ea4d, commit ab6c694. Showing 3 changed files with 375 additions and 0 deletions.

Deploy Confluent Platform
=========================

In this workflow scenario, you'll set up a simple, non-secure Confluent Platform
(no authentication, authorization, or encryption), consisting of all components.

You will also configure additional custom listeners.

The goals for this scenario are to:

* Quickly set up the complete Confluent Platform on Kubernetes.
* Add additional broker listeners.
* Configure a producer to generate sample data.

Before continuing with the scenario, ensure that you have set up the
`prerequisites </README.md#prerequisites>`_.

To complete this scenario, you'll follow these steps:

#. Set the current tutorial directory.

#. Deploy Confluent for Kubernetes.

#. Deploy Confluent Platform.

#. Deploy the producer application.

#. Tear down Confluent Platform.

==================================
Set the current tutorial directory
==================================

Set the tutorial directory for this tutorial under the directory where you downloaded
the tutorial files:

::

  export TUTORIAL_HOME=<Tutorial directory>/kafka-additional-listeners

===============================
Deploy Confluent for Kubernetes
===============================

#. Set up the Helm chart:

   ::

     helm repo add confluentinc https://packages.confluent.io/helm

#. Install Confluent for Kubernetes using Helm:

   ::

     helm upgrade --install operator confluentinc/confluent-for-kubernetes

#. Check that the Confluent for Kubernetes pod comes up and is running:

   ::

     kubectl get pods
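
If your ``kubectl`` context is not already set to the namespace where you installed the
operator, pass it explicitly. For example, assuming the ``confluent`` namespace used by the
rest of this tutorial:

::

  kubectl get pods --namespace confluent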

========================================
Review Confluent Platform configurations
========================================

You install Confluent Platform components as custom resources (CRs).

You can configure all Confluent Platform components as custom resources. In this
tutorial, you will configure all components in a single file and deploy all
components with one ``kubectl apply`` command.

The entire Confluent Platform is configured in one configuration file:
``$TUTORIAL_HOME/confluent-platform.yaml``

In this configuration file, there is a custom resource configuration spec for
each Confluent Platform component: replicas, the image to use, and resource
allocations.
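
In particular, the ``Kafka`` custom resource in this file declares the two additional
plaintext custom listeners that this scenario adds, on ports 9204 and 9205:

::

  listeners:
    custom:
      - name: customlistener1
        port: 9204
      - name: customlistener2
        port: 9205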

=========================
Deploy Confluent Platform
=========================

#. Deploy Confluent Platform with the above configuration:

   ::

     kubectl apply -f $TUTORIAL_HOME/confluent-platform.yaml

#. Check that all Confluent Platform resources are deployed:

   ::

     kubectl get confluent

#. Get the status of any component. For example, to check Kafka:

   ::

     kubectl describe kafka
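
If you want to block until the brokers are ready before moving on, one option is
``kubectl wait``. This is a sketch that assumes the default ``kafka-0``, ``kafka-1``, and
``kafka-2`` pod names that Confluent for Kubernetes creates for a three-broker cluster in
the ``confluent`` namespace:

::

  kubectl wait --for=condition=Ready pod/kafka-0 pod/kafka-1 pod/kafka-2 \
    --namespace confluent --timeout=600s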

========
Validate
========

Deploy producer application
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now that we've got the infrastructure set up, let's deploy the producer client
app.

The producer app is packaged and deployed as a pod on Kubernetes. The required
topic is defined as a KafkaTopic custom resource in
``$TUTORIAL_HOME/producer-app-data.yaml``.

The ``$TUTORIAL_HOME/producer-app-data.yaml`` file defines the ``elastic-0``
topic as follows:

::

  apiVersion: platform.confluent.io/v1beta1
  kind: KafkaTopic
  metadata:
    name: elastic-0
    namespace: confluent
  spec:
    replicas: 3 # change to 1 if using a single node
    partitionCount: 1
    configs:
      cleanup.policy: "delete"

Deploy the producer app:

::

  kubectl apply -f $TUTORIAL_HOME/producer-app-data.yaml
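
Optionally, confirm that the producer pod is running before validating. The pod name
``elastic-0`` follows from the ``elastic`` StatefulSet defined with one replica in
``producer-app-data.yaml``:

::

  kubectl get pods elastic-0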

Validate in Control Center
^^^^^^^^^^^^^^^^^^^^^^^^^^

Use Control Center to monitor the Confluent Platform and to see the created topic and data.

#. Set up port forwarding to the Control Center web UI from your local machine:

   ::

     kubectl port-forward controlcenter-0 9021:9021

#. Browse to Control Center:

   ::

     http://localhost:9021

#. Check that the ``elastic-0`` topic was created and that messages are being produced to the topic.
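
If you prefer the command line, you can also list the topics directly from a broker pod.
This is a sketch that assumes the default ``kafka-0`` pod and ``kafka`` container names and
uses the internal plaintext listener on port 9071:

::

  kubectl exec kafka-0 -c kafka -- kafka-topics --bootstrap-server localhost:9071 --list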

Review the additional listeners
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Review the Kafka ConfigMap to check the additional custom listeners:

::

  kubectl get configmap kafka-shared-config -o jsonpath="{.data.kafka\.properties}" | grep -i listener

You should see output like the following, indicating the new listeners:

::

  inter.broker.listener.name=REPLICATION
  listener.security.protocol.map=CUSTOMLISTENER1:PLAINTEXT,CUSTOMLISTENER2:PLAINTEXT,EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT,REPLICATION:PLAINTEXT
  listeners=CUSTOMLISTENER1://:9204,CUSTOMLISTENER2://:9205,EXTERNAL://:9092,INTERNAL://:9071,REPLICATION://:9072
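
To confirm that clients can reach one of the new listeners, you can point a Kafka CLI tool at
port 9204. This is a sketch that assumes Confluent for Kubernetes exposes the custom listener
ports on the ``kafka`` service, and that the default ``kafka-0`` pod and ``kafka`` container
names are in place:

::

  kubectl exec kafka-0 -c kafka -- \
    kafka-broker-api-versions --bootstrap-server kafka.confluent.svc.cluster.local:9204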

=========
Tear Down
=========

Shut down Confluent Platform and the data:

::

  kubectl delete -f $TUTORIAL_HOME/producer-app-data.yaml

::

  kubectl delete -f $TUTORIAL_HOME/confluent-platform.yaml

::

  helm delete operator

networking/kafka-additional-listeners/confluent-platform.yaml (114 additions, 0 deletions)

---
apiVersion: platform.confluent.io/v1beta1
kind: Zookeeper
metadata:
  name: zookeeper
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-zookeeper:7.0.1
    init: confluentinc/confluent-init-container:2.2.0
  dataVolumeCapacity: 10Gi
  logVolumeCapacity: 10Gi
---
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-server:7.0.1
    init: confluentinc/confluent-init-container:2.2.0
  dataVolumeCapacity: 10Gi
  listeners:
    # internal: # This will be configured by default
    # external: # This will be configured by default
    custom:
      - name: customlistener1
        port: 9204
      - name: customlistener2
        port: 9205
      # - name: customlistener-sasl-plain
      #   port: 9206
      #   authentication:
      #     type: plain
      #     jaasConfig:
      #       secretRef: credential
      #   externalAccess:
      #     type: nodePort
      #     nodePort:
      #       nodePortOffset: 30000
      #       host: host.example1.com
      # - name: customlistener-mtls
      #   port: 9207
      #   authentication:
      #     type: mtls
      #     principalMappingRules:
      #       - RULE:^CN=([a-zA-Z0-9]*).*$/$1/
      #   tls:
      #     enabled: true
      #     secretRef: tls-group1
  metricReporter:
    enabled: true
---
apiVersion: platform.confluent.io/v1beta1
kind: Connect
metadata:
  name: connect
  namespace: confluent
spec:
  replicas: 1
  image:
    application: confluentinc/cp-server-connect:7.0.1
    init: confluentinc/confluent-init-container:2.2.0
  dependencies:
    kafka:
      bootstrapEndpoint: kafka:9071
---
apiVersion: platform.confluent.io/v1beta1
kind: KsqlDB
metadata:
  name: ksqldb
  namespace: confluent
spec:
  replicas: 1
  image:
    application: confluentinc/cp-ksqldb-server:7.0.1
    init: confluentinc/confluent-init-container:2.2.0
  dataVolumeCapacity: 10Gi
---
apiVersion: platform.confluent.io/v1beta1
kind: ControlCenter
metadata:
  name: controlcenter
  namespace: confluent
spec:
  replicas: 1
  image:
    application: confluentinc/cp-enterprise-control-center:7.0.1
    init: confluentinc/confluent-init-container:2.2.0
  dataVolumeCapacity: 10Gi
  dependencies:
    schemaRegistry:
      url: http://schemaregistry.confluent.svc.cluster.local:8081
    ksqldb:
      - name: ksqldb
        url: http://ksqldb.confluent.svc.cluster.local:8088
    connect:
      - name: connect
        url: http://connect.confluent.svc.cluster.local:8083
---
apiVersion: platform.confluent.io/v1beta1
kind: SchemaRegistry
metadata:
  name: schemaregistry
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-schema-registry:7.0.1
    init: confluentinc/confluent-init-container:2.2.0

networking/kafka-additional-listeners/producer-app-data.yaml (68 additions, 0 deletions)

apiVersion: v1
kind: Secret
metadata:
  name: kafka-client-config
  namespace: confluent
type: Opaque
data:
  kafka.properties: Ym9vdHN0cmFwLnNlcnZlcnM9a2Fma2EuY29uZmx1ZW50LnN2Yy5jbHVzdGVyLmxvY2FsOjkwNzEKc2VjdXJpdHkucHJvdG9jb2w9UExBSU5URVhU
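  # The base64 value above decodes to the client configuration:
  #   bootstrap.servers=kafka.confluent.svc.cluster.local:9071
  #   security.protocol=PLAINTEXT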
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elastic
spec:
  serviceName: elastic
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app: elastic
  template:
    metadata:
      labels:
        app: elastic
    spec:
      containers:
      - name: elastic
        image: confluentinc/cp-kafka:latest
        command:
        - /bin/sh
        - -c
        - |
          kafka-producer-perf-test \
            --topic elastic-0 \
            --record-size 64 \
            --throughput 1 \
            --producer.config /mnt/kafka.properties \
            --num-records 230400
        volumeMounts:
        - name: kafka-properties
          mountPath: /mnt
          readOnly: true
        resources:
          requests:
            memory: 512Mi # 768Mi
            cpu: 500m # 1000m
      volumes:
      - name: kafka-properties # Create secret with name `kafka-client-config` with client configurations
        secret:
          secretName: kafka-client-config
---
apiVersion: v1
kind: Service
metadata:
  name: elastic
spec:
  clusterIP: None
---
apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: elastic-0
  namespace: confluent
spec:
  replicas: 3
  partitionCount: 1
  configs:
    cleanup.policy: "delete"