A hands-on lab demonstrating an end-to-end integration from a web application, through JMS to IBM MQ, and on to Kafka. The application sends sold-item data from different stores to an MQ queue, which serves as the source for the MQ Kafka connector that writes each sold item to the `items` Kafka topic.
In this readme we present local deployments to your workstation with Confluent Platform or Strimzi, as well as the deployment of Confluent Platform on OpenShift.
For IBM Event Streams and IBM MQ with Cloud Pak for Integration, we have different labs described in the EDA use cases.
- Developers and architects.
- Lab 1: Run Confluent and IBM MQ locally and test the integration between MQ queues and Kafka topics using the Confluent Kafka MQ connectors.
- Lab 2: Deploy the connector scenario to an OpenShift cluster with Confluent Platform and IBM MQ already deployed.
You will need the following:
This lab scenario utilizes the officially supported IBM MQ connectors from Confluent: the IBM MQ Source Connector and the IBM MQ Sink Connector. Both connectors require the IBM MQ client jar (`com.ibm.mq.allclient.jar`) to be downloaded separately and included with any runtime deployments. This is covered below.
- Clone this repository:

  ```shell
  git clone https://github.com/ibm-cloud-architecture/eda-lab-mq-to-kafka.git
  cd eda-lab-mq-to-kafka/confluent
  ```
- Download the `confluentinc-kafka-connect-ibmmq-11.0.8.zip` file from https://www.confluent.io/hub/confluentinc/kafka-connect-ibmmq and copy the expanded contents (the entire `confluentinc-kafka-connect-ibmmq-11.0.8` folder) to `./data/connect-jars`:

  ```shell
  # Verify latest version of Confluent MQ Connector
  curl -s -L https://www.confluent.io/hub/confluentinc/kafka-connect-ibmmq | \
    grep --only-matching "confluent-hub install confluentinc/kafka-connect-ibmmq\:[0-9]*\.[0-9]*\.[0-9]*" | \
    sed "s/confluent-hub install confluentinc\/kafka-connect-ibmmq\://g"

  # Latest version at the time of this writing was 11.0.8
  # Manually download the file from https://www.confluent.io/hub/confluentinc/kafka-connect-ibmmq
  unzip ~/Downloads/confluentinc-kafka-connect-ibmmq-11.0.8 -d ./data/connect-jars/
  ```
- Download the required IBM MQ client jars:

  ```shell
  curl -s https://repo1.maven.org/maven2/com/ibm/mq/com.ibm.mq.allclient/9.2.2.0/com.ibm.mq.allclient-9.2.2.0.jar -o com.ibm.mq.allclient-9.2.2.0.jar
  cp com.ibm.mq.allclient-9.2.2.0.jar data/connect-jars/confluentinc-kafka-connect-ibmmq-11.0.8/lib/.
  ```
- Start the containers locally by launching the `docker-compose` stack:

  ```shell
  docker-compose up -d
  ```
- Wait for the MQ Queue Manager to successfully start:

  ```shell
  docker logs -f ibmmq
  # Wait for the following lines:
  # xyzZ Started web server
  # xyzZ AMQ5041I: The queue manager task 'AUTOCONFIG' has ended. [CommentInsert1(AUTOCONFIG)]
  ```
- Access the Store Simulator web application via http://localhost:8080/#/simulator.
  - Under the Simulator tab, select IBMMQ as the backend, choose any number of events you wish to simulate, and click the Simulate button.
- Access the IBM MQ Console via https://localhost:9443.
  - Log in using the default admin credentials of `admin`/`passw0rd`, accepting any security warnings for self-signed certificate usage.
  - Navigate to the QM1 management screen via the Manage QM1 tile.
  - Click on the DEV.QUEUE.1 queue to view the simulated messages from the Store Simulator.
- Configure the Kafka Connector instance via the Kafka Connect REST API:

  ```shell
  curl -i -X PUT -H "Content-Type:application/json" \
    http://localhost:8083/connectors/eda-store-source/config \
    -d @kustomize/environment/kconnect/config/mq-confluent-source.json
  ```
  You should receive a response similar to the following:

  ```
  HTTP/1.1 201 Created
  Date: Tue, 13 Apr 2021 18:16:50 GMT
  Location: http://localhost:8083/connectors/eda-store-source
  Content-Type: application/json
  Content-Length: 634
  Server: Jetty(9.4.24.v20191120)

  {"name":"eda-store-source","config":{"connector.class":"io.confluent.connect.ibm.mq.IbmMQSourceConnector","tasks.max":"1","key.converter":"org.apache.kafka.connect.storage.StringConverter","value.converter":"org.apache.kafka.connect.json.JsonConverter","mq.hostname":"ibmmq","mq.port":"1414","mq.transport.type":"client","mq.queue.manager":"QM1","mq.channel":"DEV.APP.SVRCONN","mq.username":"app","mq.password":"adummypasswordusedlocally","jms.destination.name":"DEV.QUEUE.1","jms.destination.type":"QUEUE","kafka.topic":"items","confluent.topic.bootstrap.servers":"broker:29092","name":"eda-store-source"},"tasks":[],"type":"source"}
  ```
For more details on the Kafka Connect REST API, you can visit the Confluent Docs. This step can also be performed via the Confluent Control Center UI.
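As a quick sanity check, you can also query the connector's status through the same REST API; this uses the standard Kafka Connect `/status` endpoint (the pipe to `jq` is optional if you do not have it installed locally):

```shell
# Check the state of the newly created connector and its tasks;
# both should report a RUNNING state
curl -s http://localhost:8083/connectors/eda-store-source/status | jq .
```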
- Access Confluent Control Center via http://localhost:9021. (NOTE: This component sleeps for two minutes upon initial startup.)
  - Click on your active cluster.
  - Click on Connect in the left-nav menu, then `connect` in the Connect Cluster list.
  - You should see your `Running` eda-store-source connector.
  - Click on Topics in the left-nav menu and select `items` in the Topics list.
  - Click on the Messages tab and enter `0` in the Offset textbox.
  - You should see all the messages that were previously in your `DEV.QUEUE.1` queue now in your `items` topic, and they are no longer in the MQ queue!
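If you prefer the command line over Control Center, here is a minimal sketch for inspecting the topic, assuming the Compose stack names the Kafka broker container `broker` with the internal listener `broker:29092`, as referenced in the connector configuration above:

```shell
# Consume the items topic from the beginning inside the broker container
docker exec broker kafka-console-consumer \
  --bootstrap-server broker:29092 \
  --topic items \
  --from-beginning
```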
- To stop the environment once you are complete:

  ```shell
  docker-compose down
  ```
Lab contents:
- Pre-requisites
- Scenario walkthrough
- Deploy MQ queue manager with remote access enabled
- Deploy Store Simulator application
- Create custom Kafka Connect container images
- Update Confluent Platform container deployments
- Configure MQ Connector
- Verify end-to-end connectivity
- Lab complete!
You need the following:
- git
- jq
- OpenShift oc CLI
- openssl & keytool - Installed as part of your Linux/Mac OS X-based operating system and Java JVM respectively.
- Confluent Platform (Kafka cluster) deployed on Red Hat OpenShift via Confluent Operator
- IBM MQ Operator on Red Hat OpenShift
- Clone this repository. All subsequent commands are run from the root directory of the cloned repository.

  ```shell
  git clone https://github.com/ibm-cloud-architecture/eda-lab-mq-to-kafka.git
  cd eda-lab-mq-to-kafka
  ```
- The lab setup can be run with any number of projects across the three logical components below. Update the environment variables below with your respective project for each component, and the instructions will always reference the correct project to run the commands against. All three values can be identical if all components are installed into a single project.

  ```shell
  export PROJECT_CONFLUENT_PLATFORM=my-confluent-platform-project
  export PROJECT_MQ=my-ibm-mq-project
  export PROJECT_STORE_SIMULATOR=my-store-simulator-project
  ```

  NOTE: If any of the above projects do not yet exist in your OpenShift cluster, you will need to create them via the `oc new-project PROJECT_NAME` command.
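For example, a minimal sketch assuming none of the three projects exist yet:

```shell
# Create each project referenced by the lab (skip any that already exist)
oc new-project ${PROJECT_CONFLUENT_PLATFORM}
oc new-project ${PROJECT_MQ}
oc new-project ${PROJECT_STORE_SIMULATOR}
```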
IBM MQ queue managers that are exposed by a Route on OpenShift require TLS-enabled security, so we will first create an SSL certificate pair and a truststore, for queue manager and client use respectively.
- Create the TLS certificate and key for use by the MQ QueueManager custom resource:

  ```shell
  openssl req -newkey rsa:2048 -nodes -subj "/CN=localhost" -x509 -days 3650 \
    -keyout ./kustomize/environment/mq/base/certificates/tls.key \
    -out ./kustomize/environment/mq/base/certificates/tls.crt
  ```
- Create the TLS client truststore for use by the Store Simulator and Kafka Connect applications:

  ```shell
  keytool -import -keystore ./kustomize/environment/mq/base/certificates/mq-tls.jks \
    -file ./kustomize/environment/mq/base/certificates/tls.crt \
    -storepass my-mq-password -noprompt -keyalg RSA -storetype JKS
  ```
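Before proceeding, you can verify the certificate was imported correctly; a quick check with the same `keytool` binary:

```shell
# List the entries in the newly created truststore; expect one trusted certificate
keytool -list -keystore ./kustomize/environment/mq/base/certificates/mq-tls.jks \
  -storepass my-mq-password
```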
- Create the OpenShift resources by applying the Kustomize YAMLs:

  ```shell
  oc project ${PROJECT_MQ}
  oc apply -k ./kustomize/environment/mq -n ${PROJECT_MQ}
  ```
REFERENCE MATERIAL: Create a TLS-secured queue manager via Example: Configuring TLS
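Before moving on, you can wait for the queue manager to become ready. A sketch, assuming the IBM MQ Operator's `QueueManager` custom resource, whose status should eventually report `Running`:

```shell
# Watch the QueueManager resources in the MQ project until the status shows Running
oc get queuemanagers -n ${PROJECT_MQ} -w
```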
- Update the `store-simulator` ConfigMap YAML to point to the specific MQ queue manager's Route:

  ```shell
  export MQ_ROUTE_HOST=$(oc get route store-simulator-mq-ibm-mq-qm -o jsonpath="{.spec.host}" -n ${PROJECT_MQ})
  cat ./kustomize/apps/store-simulator/base/configmap.yaml | envsubst | \
    tee ./kustomize/apps/store-simulator/base/configmap.yaml >/dev/null
  cat ./kustomize/apps/store-simulator/base/configmap-mq-ccdt.yaml | envsubst | \
    tee ./kustomize/apps/store-simulator/base/configmap-mq-ccdt.yaml >/dev/null
  ```
- The Store Simulator application acts as an MQ client and requires the necessary truststore information for secure connectivity. Copy the truststore secret that was generated by the MQ component deployment into the Store Simulator project for re-use:

  ```shell
  oc get secret -n ${PROJECT_MQ} -o json store-simulator-mq-truststore | \
    jq -r ".metadata.namespace=\"${PROJECT_STORE_SIMULATOR}\"" | \
    oc apply -n ${PROJECT_STORE_SIMULATOR} -f -
  ```

  NOTE: This step is only required if you are running MQ in a different project than the Store Simulator application.
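You can confirm the secret now exists in the target project with a quick check:

```shell
# The copied truststore secret should be listed in the Store Simulator project
oc get secret store-simulator-mq-truststore -n ${PROJECT_STORE_SIMULATOR}
```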
- Apply the Kustomize YAMLs:

  ```shell
  oc project ${PROJECT_STORE_SIMULATOR}
  oc apply -k ./kustomize/apps/store-simulator -n ${PROJECT_STORE_SIMULATOR}
  ```
- Send messages to MQ via the store simulator application:
  - The store simulator user interface is exposed as a Route on OpenShift:

    ```shell
    oc get route store-simulator -o jsonpath="{.spec.host}" -n ${PROJECT_STORE_SIMULATOR}
    ```

  - Access this Route via HTTP in your browser.
  - Go to the SIMULATOR tab.
  - Select the IBMMQ radio button and use the slider to select the number of messages to send.
  - Click the Simulate button and wait for the Messages Sent window to be populated.
- Validate messages received in the MQ Web Console:
  - The MQ Web Console is exposed as a Route on OpenShift:

    ```shell
    oc get route store-simulator-mq-ibm-mq-web -o jsonpath="{.spec.host}" -n ${PROJECT_MQ}
    ```

  - Go to this route via HTTPS in your browser and log in.
  - If you need to determine your Default authentication admin password, it can be retrieved via the following command:

    ```shell
    oc get secret -n {CP4I installation project} ibm-iam-bindinfo-platform-auth-idp-credentials -o json | jq -r .data.admin_password | base64 -d -
    ```

  - Click the QM1 tile.
  - Click the DEV.QUEUE.1 queue.
  - Verify that the queue depth is equal to the number of messages sent from the store application.
- Apply the Kafka Connect components from the Kustomize YAMLs:

  ```shell
  oc project ${PROJECT_CONFLUENT_PLATFORM}
  oc apply -k ./kustomize/environment/kconnect/ -n ${PROJECT_CONFLUENT_PLATFORM}
  oc logs -f buildconfig/confluent-connect-mq -n ${PROJECT_CONFLUENT_PLATFORM}
  ```

  This creates two ImageStreamTags based on the official Confluent Platform container images, which can now be referenced locally in the cluster by the Connect Cluster pods. It also creates a BuildConfig that produces a custom container image with the required Confluent Platform MQ Connector binaries pre-installed; the build in turn creates an additional ImageStreamTag that allows us to update the Connect Cluster pods to use the new image.
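Once the build completes, you can confirm the resulting images are available in the internal registry; a quick check, where the ImageStream names below match those referenced later in the Helm example:

```shell
# Expect cp-server-connect-operator and cp-init-container-operator ImageStreams,
# including the custom 6.1.1.0-custom-mq tag produced by the build
oc get imagestreams -n ${PROJECT_CONFLUENT_PLATFORM}
```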
- The Kafka Connect instance acts as an MQ client and requires the necessary truststore information for secure connectivity. Copy the truststore secret that was generated by the MQ component deployment to the local Confluent project for re-use by the Connector:

  ```shell
  oc get secret -n ${PROJECT_MQ} -o json store-simulator-mq-truststore | \
    jq -r ".metadata.namespace=\"${PROJECT_CONFLUENT_PLATFORM}\"" | \
    oc apply -n ${PROJECT_CONFLUENT_PLATFORM} -f -
  ```

  NOTE: This step is only required if you are running MQ in a different project than the Confluent Platform.
- Next, we need to patch the ConfigMap the Connectors pod uses to inject JVM configuration parameters (`jvm.config`) into the Connect runtime. We will do this by patching the PhysicalStatefulCluster that manages the Connect cluster. This is required because we are using a non-IBM JVM inside the Confluent-provided Connect images, and the SSL Cipher Suite Mappings used by default are incompatible. By adding the `-Dcom.ibm.mq.cfg.useIBMCipherMappings=false` JVM configuration parameter, we allow the OpenJDK JVM to leverage the Oracle-compatible Cipher Suite Mappings instead.

  ```shell
  oc get psc/connectors -o yaml -n ${PROJECT_CONFLUENT_PLATFORM} | \
    sed 's/ -Dcom.sun.management.jmxremote.ssl=false/ -Dcom.sun.management.jmxremote.ssl=false\n -Dcom.ibm.mq.cfg.useIBMCipherMappings=false/' | \
    oc replace -n ${PROJECT_CONFLUENT_PLATFORM} -f -
  ```
REFERENCE: If you encounter CipherSuite issues in the Connector logs, reference TLS CipherSpecs and CipherSuites in IBM MQ classes for JMS from the IBM MQ documentation.
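To confirm the parameter was injected, you can re-read the PhysicalStatefulCluster and search for the new flag; a quick check reusing the same `oc get` command from above:

```shell
# The patched PSC should now contain the useIBMCipherMappings JVM flag
oc get psc/connectors -o yaml -n ${PROJECT_CONFLUENT_PLATFORM} | grep useIBMCipherMappings
```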
This lab assumes that Confluent Platform is deployed via https://github.ibm.com/ben-cornwell/confluent-operator, which utilizes the Confluent Operator Quick Start and deploys the Schema Registry, Replicator, Connect, and Control Center components in a single Helm release. This is problematic when following Step 5 of the Deploy Confluent Connectors instructions, as the required image registries cannot be mixed between different components in the same release: Connect requires the internal OpenShift registry for the custom images we just built, while the other components still require the original docker.io registry.

To circumvent this issue, we can manually patch the Kafka Connect `PhysicalStatefulCluster` custom resource for the Confluent Operator to propagate changes down to the pod level and take advantage of the newly built custom Connect images (as well as the TLS truststore files):

```shell
oc patch psc/connectors --type merge --patch "$(cat ./kustomize/environment/kconnect/infra/confluent-connectors-psc-patch.yaml | envsubst)" -n ${PROJECT_CONFLUENT_PLATFORM}
```
However, if Confluent Platform was deployed via the instructions available at Install Confluent Operator and Confluent Platform, and Connect is available as its own Helm release (i.e. `helm get notes connectors` succeeds), you can follow Step 5 of the Deploy Confluent Connectors instructions to update the Confluent custom resources via Helm. If this path is taken, you may need to reapply the `useIBMCipherMappings` patch from the previous section.

A `helm upgrade` command may look something like the following:

```shell
helm upgrade --install connectors \
  --values /your/original/values/file/values-file.yaml \
  --namespace ${PROJECT_CONFLUENT_PLATFORM} \
  --set "connect.enabled=true" \
  --set "connect.mountedSecrets[0].secretRef=store-simulator-mq-truststore" \
  --set "global.provider.registry.fqdn=image-registry.openshift-image-registry.svc:5000" \
  --set "connect.image.repository=${PROJECT_CONFLUENT_PLATFORM}/cp-server-connect-operator" \
  --set "connect.image.tag=6.1.1.0-custom-mq" \
  --set "global.initContainer.image.repository=${PROJECT_CONFLUENT_PLATFORM}/cp-init-container-operator" \
  --set "global.initContainer.image.tag=6.1.1.0" \
  ./confluent-operator-1.7.0/helm/confluent-operator
```
- Log in to Confluent Control Center and navigate to Home > controlcenter.cluster > Connect > connect-default > Add connector and verify that the IbmMqSinkConnector and IbmMQSourceConnector are now available as connector options.

- Optionally, you can run the following `curl` command to verify via the REST API:

  ```shell
  curl --insecure --silent https://$(oc get route connectors-bootstrap -o jsonpath="{.spec.host}" -n ${PROJECT_CONFLUENT_PLATFORM})/connector-plugins | jq .
  ```
- Create the target Kafka topic in Confluent Platform:
  - In the Confluent Control Center, navigate to Home > controlcenter.cluster > Topics and click Add a topic.
  - Enter `items.openshift` (or your own custom topic name).
  - Click Create with defaults.
- Generate a customized MQ connector configuration file based on your local environment:

  ```shell
  export KAFKA_BOOTSTRAP=$(oc get route kafka-bootstrap -o jsonpath="{.spec.host}" -n ${PROJECT_CONFLUENT_PLATFORM}):443

  # Generate the configured Kafka Connect connector configuration file
  cat ./kustomize/environment/kconnect/config/mq-confluent-source-openshift.json | envsubst > ./kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json
  ```
  NOTE: You will need to manually edit the generated `./kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json` file if you used a topic name other than `items.openshift`; a `jq` one-liner for this is sketched below.
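A sketch of that manual edit, using the `jq` prerequisite to rewrite the `kafka.topic` property (the topic name `my.custom.topic` is a placeholder for your own):

```shell
# Rewrite the kafka.topic property in the generated connector configuration
CONFIG=./kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json
jq '."kafka.topic" = "my.custom.topic"' ${CONFIG} > ${CONFIG}.tmp && mv ${CONFIG}.tmp ${CONFIG}
```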
- Deploy an MQ Connector instance by choosing one of the two paths:
  - You can deploy a connector instance via the Confluent Control Center UI:
    - Log in to the Confluent Control Center and navigate to your Connect cluster via Home > controlcenter.cluster > Connect > connect-default.
    - Click Upload connector config file and browse to `eda-lab-mq-to-kafka/kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json`.
    - Click Continue.
    - Click Launch.
  - You can deploy a connector instance via the Kafka Connect REST API:

    ```shell
    export CONNECTORS_BOOTSTRAP=$(oc get route connectors-bootstrap -o jsonpath="{.spec.host}" -n ${PROJECT_CONFLUENT_PLATFORM})
    curl -i -X PUT -H "Content-Type:application/json" --insecure \
      https://$CONNECTORS_BOOTSTRAP/connectors/eda-store-source/config \
      -d @kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json
    ```

    REFERENCE: If you encounter CipherSuite issues in the Connector logs, reference TLS CipherSpecs and CipherSuites in IBM MQ classes for JMS from the IBM MQ documentation.
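Whichever path you chose, you can confirm the connector is running via the standard Kafka Connect status endpoint; a quick check, where `CONNECTORS_BOOTSTRAP` is the route exported in the REST API path above:

```shell
# Both the connector and its task should report a RUNNING state
curl --insecure --silent https://${CONNECTORS_BOOTSTRAP}/connectors/eda-store-source/status | jq .
```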
- Validate records received in the Kafka topic in Confluent Platform:
  - Log in to the Confluent Control Center and navigate to the topic via Home > controlcenter.cluster > Topics > items.openshift.
  - Click Messages.
  - Enter `0` in the offset textbox and hit Enter.
  - You should see all the messages you sent to the MQ queue now residing in the Kafka topic.
- Validate MQ queues have been drained via the MQ Web Console:
  - The MQ Web Console is exposed as a Route on OpenShift:

    ```shell
    oc get route store-simulator-mq-ibm-mq-web -o jsonpath="{.spec.host}" -n ${PROJECT_MQ}
    ```

  - Go to this route via HTTPS in your browser and log in.
  - If you need to determine your Default authentication admin password, it can be retrieved via the following command:

    ```shell
    oc get secret -n {CP4I installation project} ibm-iam-bindinfo-platform-auth-idp-credentials -o json | jq -r .data.admin_password | base64 -d -
    ```

  - Click the QM1 tile.
  - Click the DEV.QUEUE.1 queue.
  - Verify that the queue depth is zero messages.
To clean up the resources deployed via the lab scenario:
- Resources in the `${PROJECT_STORE_SIMULATOR}` project can be removed via:

  ```shell
  oc delete -k ./kustomize/apps/store-simulator/ -n ${PROJECT_STORE_SIMULATOR}
  ```

- Resources in the `${PROJECT_MQ}` project can be removed via:

  ```shell
  oc delete -k ./kustomize/environment/mq/ -n ${PROJECT_MQ}
  ```

- Resources in the `${PROJECT_CONFLUENT_PLATFORM}` project can be removed, but also require a reset of the Connectors Helm release to the original container image settings:
  - buildconfig/confluent-connect-mq
  - imagestream.image.openshift.io/cp-init-container-operator
  - imagestream.image.openshift.io/cp-server-connect-operator
  - secret/store-simulator-mq-truststore
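If you prefer to remove those resources directly with `oc`, here is a minimal sketch, assuming nothing else in the project depends on them:

```shell
# Remove the custom build, image streams, and copied truststore secret
oc delete buildconfig/confluent-connect-mq -n ${PROJECT_CONFLUENT_PLATFORM}
oc delete imagestream/cp-init-container-operator -n ${PROJECT_CONFLUENT_PLATFORM}
oc delete imagestream/cp-server-connect-operator -n ${PROJECT_CONFLUENT_PLATFORM}
oc delete secret/store-simulator-mq-truststore -n ${PROJECT_CONFLUENT_PLATFORM}
```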