Issue exposing strimzi using Openshift route #7700
Replies: 2 comments
-
You did not share any full configs of your broker or clients, full logs, etc., so it is hard to comment. Some things I noticed:
That is not how it works. To get Kafka exposed using OpenShift routes, it needs to use TLS passthrough so that the connection goes directly to the broker. So if you have some special setup where the load balancer interferes with the connection and does things such as TLS termination, it will not work.
I'm not sure what you are trying to follow, but you are not expected to create any services. The external listener in your configuration should create all the services and routes.
As said above ... there should be no certificate between the client and the route. If anything interferes with the TLS connection, it will not work.
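For reference, a minimal sketch of the kind of external `route` listener the replies assume, using the old listener syntax from the Strimzi 0.17.x era (the cluster name `my-cluster` and the omitted sections are placeholders, not the poster's actual config):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      plain: {}
      tls: {}
      # The route listener always uses TLS passthrough; Strimzi creates the
      # bootstrap and per-broker services and routes for it automatically.
      external:
        type: route
    # ... config, storage, etc. omitted
  zookeeper:
    replicas: 1
    # ... storage omitted
```

With this in place, no manually created services or routes should be needed.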
Best is to download the Kafka binaries and use the …
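As an illustration of testing with the Kafka binaries, a sketch using the console consumer they ship with (the bootstrap hostname, topic name, and truststore path below are placeholders for this setup, not values from the thread):

```shell
# config.properties would contain (paths/passwords are hypothetical):
#   security.protocol=SSL
#   ssl.truststore.location=/path/to/truststore.jks
#   ssl.truststore.password=changeit

bin/kafka-console-consumer.sh \
  --bootstrap-server <route-bootstrap-host>:443 \
  --topic my-topic --from-beginning \
  --consumer.config config.properties
```

Routes always listen on port 443, so the bootstrap server must point at the route hostname on port 443.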
-
Hi and thanks for your quick reply.
I confirm that the services and routes were automatically created by the listener configuration, but in the article https://strimzi.io/blog/2019/04/30/accessing-kafka-part-3/, as you explained, these services need to be associated with the statefulset.kubernetes.io/pod-name label. I was just saying that those services/routes were exactly as expected from a standard setup. Thanks again for your help.
-
Hi all,
I'm trying to configure external access for strimzi-kafka on OpenShift (internal access has always worked well), following this
article on the Strimzi blog for the route option. I also read similar previous GitHub threads like this:
github.com//issues/6041
Mutual TLS authentication is not a must for us, so I initially tried to get things working without mutual authentication, but with no success.
Our environment: OCP version 3.11, Strimzi 0.17.0. The OpenShift nodes are on AWS, so there is also a load balancer managing the *.apps.ocp.cluster-domain certificate.
To test reachability I was using the Offset Explorer 2.3 client.
My Kafka configuration is like this:
According to the old listener syntax, TLS is enabled by default, so I saved the OpenShift CA certificate file:
oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
And imported it into a Java keystore (of type .jks).
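That import step can be sketched with `keytool` (the alias and password here are arbitrary placeholders, and `ca.crt` is assumed to be the file saved by the `oc extract` command above):

```shell
# Import the extracted cluster CA certificate into a JKS truststore.
keytool -importcert -alias strimzi-ca -file ca.crt \
  -keystore truststore.jks -storepass changeit -noprompt
```

The resulting `truststore.jks` is what the Kafka client is pointed at via its SSL truststore settings.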
I also created a "dedicated service for each of the brokers" (only one broker in this case) as suggested in the blog post.
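For context, the per-broker services described in that blog post select a single pod by name via the statefulset.kubernetes.io/pod-name label. A sketch of what such a service looks like (the names assume a cluster called `my-cluster`; note the reply above points out Strimzi creates these automatically for a route listener):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-cluster-kafka-0
spec:
  type: ClusterIP
  selector:
    # Targets exactly one broker pod of the Kafka StatefulSet
    statefulset.kubernetes.io/pod-name: my-cluster-kafka-0
  ports:
    - name: tcp-external
      port: 9094
      targetPort: 9094
```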
Because the AWS load balancer with the *.apps.ocp.cluster-domain wildcard certificate sits between the client and the OCP route, I got a TLS handshake error. After also adding the AWS root certificate to the truststore, I was able to make a successful TLS handshake, but now there is a timeout error when connecting to the broker. From the client log I can see:
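As an aside on the handshake behaviour described above: one way to check which certificate is actually being served on the route, and therefore whether the load balancer is terminating TLS instead of passing it through, is `openssl s_client` (the hostname below is a placeholder):

```shell
# If TLS passthrough works, this should show the broker certificate signed
# by the cluster CA, not the *.apps.ocp.cluster-domain wildcard certificate.
openssl s_client -connect <route-host>:443 -servername <route-host> </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```

Seeing the wildcard certificate here would confirm the load balancer is interfering with the connection, which is what the reply says breaks route access.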
I also tried with another Kafka client (Kadeck by Xeotek, same bootstrap URL, same truststore configured), obtaining this error:
Did I miss something on the broker side? I checked the services and routes (for both bootstrap and broker): the endpoint is present and bound to target port 9094 as expected for external access...
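For completeness, those checks can be done with `oc`; the resource names below are the ones Strimzi typically generates for a cluster called `my-cluster`, so they may need adjusting:

```shell
# Services and routes created by the external listener
oc get service my-cluster-kafka-external-bootstrap my-cluster-kafka-0
oc get route my-cluster-kafka-bootstrap my-cluster-kafka-0
# Confirm the bootstrap service has an endpoint on port 9094
oc get endpoints my-cluster-kafka-external-bootstrap
```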
Any suggestion is welcome, thanks