migrate to new docker compose cmd #461

Open · wants to merge 2 commits into base: master
21 changes: 11 additions & 10 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -1,10 +1,9 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

title: ""
labels: ""
assignees: ""
---

**Description**
@@ -16,11 +15,13 @@ Validate every step in the troubleshooting section: https://docs.confluent.io/pl
Identify any existing issues that seem related: https://github.com/confluentinc/cp-demo/issues?q=is%3Aissue

If applicable, please include the output of:
- `docker-compose logs <container name>`
- any other relevant commands

- `docker compose logs <container name>`
- any other relevant commands

**Environment**
- GitHub branch: [e.g. `6.0.1-post`, etc]
- Operating System:
- Version of Docker:
- Version of Docker Compose:

- GitHub branch: [e.g. `6.0.1-post`, etc]
- Operating System:
- Version of Docker:
- Version of Docker Compose:
266 changes: 130 additions & 136 deletions docker-compose.yml → compose.yaml

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion scripts/sbc/add-broker.sh
@@ -11,7 +11,7 @@ source ${SBCDIR}/../env.sh

(cd $SBCDIR/../security && ./certs-create-per-user.sh kafka3) || exit 1

docker-compose -f $SBCDIR/../../docker-compose.yml -f $SBCDIR/docker-compose.yml up -d kafka3
docker compose -f $SBCDIR/../../compose.yaml -f $SBCDIR/compose.yaml up -d kafka3

# verify SBC responds with an add-broker balance plan
MAX_WAIT=120
20 changes: 9 additions & 11 deletions scripts/sbc/docker-compose.yml → scripts/sbc/compose.yaml
@@ -1,19 +1,18 @@
# docker-compose supports environment variable substitution with the ${VARIABLE-NAME} syntax.
# docker compose supports environment variable substitution with the ${VARIABLE-NAME} syntax.
# Environment variables can be sourced in a variety of ways. One of those ways is through
# a well known '.env' file located in the same folder as the docker-compose.yml file. See the Docker
# a well known '.env' file located in the same folder as the compose.yaml file. See the Docker
# documentation for details: https://docs.docker.com/compose/environment-variables/#the-env-file
#
#
# This feature is being used to parameterize some values within this file. In this directory is also
# a .env file, which is actually a symbolic link to <examples-root>/utils/config.env. That file
# contains values which get substituted here when docker-compose parses this file.
# contains values which get substituted here when docker compose parses this file.
#
# If you'd like to view the docker-compose.yml file rendered with its environment variable substitutions
# you can execute the `docker-compose config` command. Take note that some demos provide additional
# environment variable values by exporting them in a script prior to running `docker-compose up`.
# If you'd like to view the compose.yaml file rendered with its environment variable substitutions
# you can execute the `docker compose config` command. Take note that some demos provide additional
# environment variable values by exporting them in a script prior to running `docker compose up`.
---
version: "2.3"
services:

kafka3:
# Broker kafka3 is not started by-default in start scripts - it is used during the Self Balancing Cluster (SBC) demo
image: ${REPOSITORY}/cp-server:${CONFLUENT_DOCKER_TAG}
@@ -26,7 +25,7 @@ services:
- ./scripts/security/keypair:/tmp/conf
- ./scripts/helper:/tmp/helper
- ./scripts/security:/etc/kafka/secrets
command: "bash -c 'if [ ! -f /etc/kafka/secrets/kafka.kafka3.keystore.jks ]; then echo \"ERROR: Did not find SSL certificates in /etc/kafka/secrets/ (did you remember to run ./scripts/start.sh instead of docker-compose up -d?)\" && exit 1 ; else /etc/confluent/docker/run ; fi'"
command: "bash -c 'if [ ! -f /etc/kafka/secrets/kafka.kafka3.keystore.jks ]; then echo \"ERROR: Did not find SSL certificates in /etc/kafka/secrets/ (did you remember to run ./scripts/start.sh instead of docker compose up -d?)\" && exit 1 ; else /etc/confluent/docker/run ; fi'"
ports:
- 8093:8093
- 9093:9093
@@ -184,7 +183,6 @@ services:
KAFKA_KAFKA_REST_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.kafka3.truststore.jks
KAFKA_KAFKA_REST_SSL_TRUSTSTORE_PASSWORD: confluent
KAFKA_KAFKA_REST_CONFLUENT_METADATA_HTTP_AUTH_CREDENTIALS_PROVIDER: BASIC
KAFKA_KAFKA_REST_CONFLUENT_METADATA_BASIC_AUTH_USER_INFO: 'restAdmin:restAdmin'
KAFKA_KAFKA_REST_CONFLUENT_METADATA_BASIC_AUTH_USER_INFO: "restAdmin:restAdmin"
KAFKA_KAFKA_REST_CONFLUENT_METADATA_SERVER_URLS_MAX_AGE_MS: 60000
KAFKA_KAFKA_REST_CLIENT_CONFLUENT_METADATA_SERVER_URLS_MAX_AGE_MS: 60000
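The comment block at the top of this file describes Compose's shell-style variable substitution. A minimal sketch of the `${VARIABLE-default}` fallback behavior it relies on — `REPOSITORY` is an illustrative variable name, not necessarily the one the real `.env` file sets:

```shell
# ${VAR-default} expands to "default" only when VAR is unset;
# an exported value wins. REPOSITORY is illustrative here.
unset REPOSITORY
echo "image: ${REPOSITORY-confluentinc}/cp-server"   # falls back to the default

export REPOSITORY=myregistry.example.com
echo "image: ${REPOSITORY-confluentinc}/cp-server"   # uses the exported value
```

Running `docker compose config` shows the file after these substitutions have been applied.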

2 changes: 1 addition & 1 deletion scripts/sbc/validate_sbc_add_broker_completed.sh
@@ -4,4 +4,4 @@ SBCDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
source ${SBCDIR}/../helper/functions.sh
source ${SBCDIR}/../env.sh

docker-compose -f $SBCDIR/../../docker-compose.yml -f $SBCDIR/docker-compose.yml logs kafka1 kafka2 | grep "COMPLETED.*databalancer"
docker compose -f $SBCDIR/../../compose.yaml -f $SBCDIR/compose.yaml logs kafka1 kafka2 | grep "COMPLETED.*databalancer"
2 changes: 1 addition & 1 deletion scripts/sbc/validate_sbc_add_broker_plan_computation.sh
@@ -4,4 +4,4 @@ SBCDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
source ${SBCDIR}/../helper/functions.sh
source ${SBCDIR}/../env.sh

docker-compose -f $SBCDIR/../../docker-compose.yml -f $SBCDIR/docker-compose.yml logs kafka1 kafka2 | grep "PLAN_COMPUTATION.*databalancer"
docker compose -f $SBCDIR/../../compose.yaml -f $SBCDIR/compose.yaml logs kafka1 kafka2 | grep "PLAN_COMPUTATION.*databalancer"
2 changes: 1 addition & 1 deletion scripts/sbc/validate_sbc_add_broker_reassignment.sh
@@ -4,4 +4,4 @@ SBCDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
source ${SBCDIR}/../helper/functions.sh
source ${SBCDIR}/../env.sh

docker-compose -f $SBCDIR/../../docker-compose.yml -f $SBCDIR/docker-compose.yml logs kafka1 kafka2 | grep "REASSIGNMENT.*databalancer"
docker compose -f $SBCDIR/../../compose.yaml -f $SBCDIR/compose.yaml logs kafka1 kafka2 | grep "REASSIGNMENT.*databalancer"
4 changes: 2 additions & 2 deletions scripts/sbc/validate_sbc_kill_broker_completed.sh
@@ -4,6 +4,6 @@ SBCDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
source ${SBCDIR}/../helper/functions.sh
source ${SBCDIR}/../env.sh

docker-compose -f $SBCDIR/../../docker-compose.yml -f $SBCDIR/docker-compose.yml logs kafka1 kafka2 | grep "BROKER_FAILURE.*execution finishes" || exit 1
(docker-compose -f $SBCDIR/../../docker-compose.yml -f $SBCDIR/docker-compose.yml exec kafka1 kafka-replica-status --bootstrap-server kafka1:9091 --admin.config /etc/kafka/secrets/client_sasl_plain.config --verbose || exit 1) | grep "IsInIsr: false" && exit 1
docker compose -f $SBCDIR/../../compose.yaml -f $SBCDIR/compose.yaml logs kafka1 kafka2 | grep "BROKER_FAILURE.*execution finishes" || exit 1
(docker compose -f $SBCDIR/../../compose.yaml -f $SBCDIR/compose.yaml exec kafka1 kafka-replica-status --bootstrap-server kafka1:9091 --admin.config /etc/kafka/secrets/client_sasl_plain.config --verbose || exit 1) | grep "IsInIsr: false" && exit 1
exit 0
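The kill-broker check above inverts grep's exit status: finding `IsInIsr: false` in the replica-status output means the validation fails, and an empty grep means success. A minimal pure-shell sketch of that pattern — `no_replicas_out_of_sync` is a hypothetical stand-in for the piped `kafka-replica-status` call in the real script:

```shell
# Validation passes only when grep finds NO out-of-sync replica.
# A grep match (exit 0) is the FAILURE case here, hence the inversion.
no_replicas_out_of_sync() {
  echo "$1" | grep -q "IsInIsr: false" && return 1   # match => failure
  return 0
}

no_replicas_out_of_sync "Topic: users IsInIsr: true"  && echo "healthy"
no_replicas_out_of_sync "Topic: users IsInIsr: false" || echo "out-of-sync replica found"
```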
2 changes: 1 addition & 1 deletion scripts/sbc/validate_sbc_kill_broker_started.sh
@@ -4,4 +4,4 @@ SBCDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
source ${SBCDIR}/../helper/functions.sh
source ${SBCDIR}/../env.sh

docker-compose -f $SBCDIR/../../docker-compose.yml -f $SBCDIR/docker-compose.yml logs kafka1 kafka2 | grep "BROKER_FAILURE.*started successfully"
docker compose -f $SBCDIR/../../compose.yaml -f $SBCDIR/compose.yaml logs kafka1 kafka2 | grep "BROKER_FAILURE.*started successfully"
28 changes: 14 additions & 14 deletions scripts/start.sh
@@ -46,24 +46,24 @@ fi
#-------------------------------------------------------------------------------

# Bring up openldap
docker-compose up --no-recreate -d openldap
docker compose up --no-recreate -d openldap
sleep 5
if [[ $(docker-compose ps openldap | grep Exit) =~ "Exit" ]] ; then
if [[ $(docker compose ps openldap | grep Exit) =~ "Exit" ]] ; then
echo "ERROR: openldap container could not start. Troubleshoot and try again. For troubleshooting instructions see https://docs.confluent.io/platform/current/tutorials/cp-demo/docs/troubleshooting.html"
exit 1
fi



# Bring up tools
docker-compose up --no-recreate -d tools
docker compose up --no-recreate -d tools

# Add root CA to container (obviates need for supplying it at CLI login '--ca-cert-path')
docker-compose exec tools bash -c "cp /etc/kafka/secrets/snakeoil-ca-1.crt /usr/local/share/ca-certificates && /usr/sbin/update-ca-certificates"
docker compose exec tools bash -c "cp /etc/kafka/secrets/snakeoil-ca-1.crt /usr/local/share/ca-certificates && /usr/sbin/update-ca-certificates"


# Bring up base kafka cluster
docker-compose up --no-recreate -d zookeeper kafka1 kafka2
docker compose up --no-recreate -d zookeeper kafka1 kafka2

# Verify MDS has started
MAX_WAIT=150
@@ -72,10 +72,10 @@ retry $MAX_WAIT host_check_up kafka1 || exit 1
retry $MAX_WAIT host_check_up kafka2 || exit 1

echo "Creating role bindings for principals"
docker-compose exec tools bash -c "/tmp/helper/create-role-bindings.sh" || exit 1
docker compose exec tools bash -c "/tmp/helper/create-role-bindings.sh" || exit 1

# Workaround for setting min ISR on topic _confluent-metadata-auth
docker-compose exec kafka1 kafka-configs \
docker compose exec kafka1 kafka-configs \
--bootstrap-server kafka1:12091 \
--entity-type topics \
--entity-name _confluent-metadata-auth \
@@ -86,11 +86,11 @@ docker-compose exec kafka1 kafka-configs \


# Bring up more containers
docker-compose up --no-recreate -d schemaregistry connect control-center
docker compose up --no-recreate -d schemaregistry connect control-center

echo
echo -e "Create topics in Kafka cluster:"
docker-compose exec tools bash -c "/tmp/helper/create-topics.sh" || exit 1
docker compose exec tools bash -c "/tmp/helper/create-topics.sh" || exit 1

# Verify Kafka Connect Worker has started
MAX_WAIT=240
@@ -130,7 +130,7 @@ echo
#-------------------------------------------------------------------------------

# Start more containers
docker-compose up --no-recreate -d ksqldb-server ksqldb-cli restproxy
docker compose up --no-recreate -d ksqldb-server ksqldb-cli restproxy

# Verify ksqlDB server has started
echo
@@ -153,22 +153,22 @@ ${DIR}/consumers/listen_WIKIPEDIA_COUNT_GT_1.sh
echo
echo
echo "Start the Kafka Streams application wikipedia-activity-monitor"
docker-compose up --no-recreate -d streams-demo
docker compose up --no-recreate -d streams-demo
echo "..."


#-------------------------------------------------------------------------------


# Verify Docker containers started
if [[ $(docker-compose ps) =~ "Exit 137" ]]; then
echo -e "\nERROR: At least one Docker container did not start properly, see 'docker-compose ps'. Did you increase the memory available to Docker to at least 8 GB (default is 2 GB)?\n"
if [[ $(docker compose ps) =~ "Exit 137" ]]; then
echo -e "\nERROR: At least one Docker container did not start properly, see 'docker compose ps'. Did you increase the memory available to Docker to at least 8 GB (default is 2 GB)?\n"
exit 1
fi

echo
echo -e "\nAvailable LDAP users:"
#docker-compose exec openldap ldapsearch -x -h localhost -b dc=confluentdemo,dc=io -D "cn=admin,dc=confluentdemo,dc=io" -w admin | grep uid:
#docker compose exec openldap ldapsearch -x -h localhost -b dc=confluentdemo,dc=io -D "cn=admin,dc=confluentdemo,dc=io" -w admin | grep uid:
curl -u mds:mds -X POST "https://localhost:8091/security/1.0/principals/User%3Amds/roles/UserAdmin" \
-H "accept: application/json" -H "Content-Type: application/json" \
-d "{\"clusters\":{\"kafka-cluster\":\"does_not_matter\"}}" \
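start.sh gates each stage on readiness checks such as `retry $MAX_WAIT host_check_up kafka1 || exit 1`. A minimal sketch of that polling pattern, assuming the real helpers live in scripts/helper/functions.sh — both function bodies below are illustrative stand-ins, not the repository's actual implementations:

```shell
# Poll a check command once per second until it succeeds or the
# timeout expires. Returns 0 on success, 1 on timeout.
retry() {
  local max_wait=$1; shift
  local waited=0
  until "$@"; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$max_wait" ]; then
      return 1
    fi
  done
  return 0
}

host_check_up() {
  # The real check probes the broker (e.g. its MDS endpoint); stubbed here.
  true
}

retry 5 host_check_up kafka1 && echo "kafka1 is up"
```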
24 changes: 12 additions & 12 deletions scripts/validate/validate_rest_proxy.sh
@@ -35,62 +35,62 @@ topic="users"
subject="$topic-value"
group="my_avro_consumer"

docker-compose exec tools bash -c "confluent iam rbac role-binding create \
docker compose exec tools bash -c "confluent iam rbac role-binding create \
--principal $CLIENT_PRINCIPAL \
--role ResourceOwner \
--resource Subject:$subject \
--kafka-cluster-id $KAFKA_CLUSTER_ID \
--schema-registry-cluster-id $SR"

# Register a new Avro schema for topic 'users'
docker-compose exec schemaregistry curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt --data '{ "schema": "[ { \"type\":\"record\", \"name\":\"user\", \"fields\": [ {\"name\":\"userid\",\"type\":\"long\"}, {\"name\":\"username\",\"type\":\"string\"} ]} ]" }' -u $CLIENT_NAME:$CLIENT_NAME https://schemaregistry:8085/subjects/$subject/versions
docker compose exec schemaregistry curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt --data '{ "schema": "[ { \"type\":\"record\", \"name\":\"user\", \"fields\": [ {\"name\":\"userid\",\"type\":\"long\"}, {\"name\":\"username\",\"type\":\"string\"} ]} ]" }' -u $CLIENT_NAME:$CLIENT_NAME https://schemaregistry:8085/subjects/$subject/versions

# Get the Avro schema id
schemaid=$(docker exec schemaregistry curl -s -X GET --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt -u $CLIENT_NAME:$CLIENT_NAME https://schemaregistry:8085/subjects/$subject/versions/1 | jq '.id')

# Go through steps at https://docs.confluent.io/platform/current/tutorials/cp-demo/docs/index.html#crest-long?utm_source=github&utm_medium=demo&utm_campaign=ch.cp-demo_type.community_content.cp-demo#confluent-rest-proxy

docker-compose exec tools bash -c "confluent iam rbac role-binding create \
docker compose exec tools bash -c "confluent iam rbac role-binding create \
--principal $CLIENT_PRINCIPAL \
--role DeveloperWrite \
--resource Topic:$topic \
--kafka-cluster-id $KAFKA_CLUSTER_ID"

docker-compose exec restproxy curl -X POST -H "Content-Type: application/vnd.kafka.avro.v2+json" -H "Accept: application/vnd.kafka.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt --data '{"value_schema_id": '"$schemaid"', "records": [{"value": {"user":{"userid": 1, "username": "Bunny Smith"}}}]}' -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/topics/$topic
docker compose exec restproxy curl -X POST -H "Content-Type: application/vnd.kafka.avro.v2+json" -H "Accept: application/vnd.kafka.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt --data '{"value_schema_id": '"$schemaid"', "records": [{"value": {"user":{"userid": 1, "username": "Bunny Smith"}}}]}' -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/topics/$topic

docker-compose exec restproxy curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt --data '{"name": "my_consumer_instance", "format": "avro", "auto.offset.reset": "earliest"}' -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group
docker compose exec restproxy curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt --data '{"name": "my_consumer_instance", "format": "avro", "auto.offset.reset": "earliest"}' -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group

docker-compose exec restproxy curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt --data '{"topics":["users"]}' -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group/instances/my_consumer_instance/subscription
docker compose exec restproxy curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt --data '{"topics":["users"]}' -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group/instances/my_consumer_instance/subscription

docker-compose exec tools bash -c "confluent iam rbac role-binding create \
docker compose exec tools bash -c "confluent iam rbac role-binding create \
--principal $CLIENT_PRINCIPAL \
--role ResourceOwner \
--resource Group:$group \
--kafka-cluster-id $KAFKA_CLUSTER_ID"

docker-compose exec tools bash -c "confluent iam rbac role-binding create \
docker compose exec tools bash -c "confluent iam rbac role-binding create \
--principal $CLIENT_PRINCIPAL \
--role DeveloperRead \
--resource Topic:$topic \
--kafka-cluster-id $KAFKA_CLUSTER_ID"

# Note: Issue this command twice due to https://github.com/confluentinc/kafka-rest/issues/432
docker-compose exec restproxy curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group/instances/my_consumer_instance/records
output=$(docker-compose exec restproxy curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group/instances/my_consumer_instance/records)
docker compose exec restproxy curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group/instances/my_consumer_instance/records
output=$(docker compose exec restproxy curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group/instances/my_consumer_instance/records)
if [[ $output =~ "Bunny Smith" ]]; then
printf "\nPASS: Output matches expected output:\n$output"
else
printf "\nFAIL: Output does not match expected output:\n$output"
fi

docker-compose exec restproxy curl -X DELETE -H "Content-Type: application/vnd.kafka.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group/instances/my_consumer_instance
docker compose exec restproxy curl -X DELETE -H "Content-Type: application/vnd.kafka.v2+json" --cert /etc/kafka/secrets/restproxy.certificate.pem --key /etc/kafka/secrets/restproxy.key --tlsv1.2 --cacert /etc/kafka/secrets/snakeoil-ca-1.crt -u $CLIENT_NAME:$CLIENT_NAME https://restproxy:8086/consumers/$group/instances/my_consumer_instance


#################

echo -e "\n\n\nValidating the embedded REST Proxy...\n"

docker-compose exec tools bash -c "confluent iam rbac role-binding create \
docker compose exec tools bash -c "confluent iam rbac role-binding create \
--principal User:appSA \
--role ResourceOwner \
--resource Topic:dev_users \
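The REST Proxy calls above splice `$schemaid` into single-quoted JSON by briefly dropping out of the single quotes. A minimal sketch of that quoting pattern — the id value is made up; the real one comes from the Schema Registry lookup earlier in the script:

```shell
# Close the single-quoted string, emit "$schemaid" inside double
# quotes, then reopen single quotes; the shell concatenates the parts.
schemaid=7   # illustrative value
payload='{"value_schema_id": '"$schemaid"', "records": []}'
echo "$payload"   # -> {"value_schema_id": 7, "records": []}
```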