-
@scholzj, we recently migrated to version 0.31.1 with Kafka version 3.2.3. Previous versions used StatefulSets, which we scaled down and back up to restart the Kafka and ZooKeeper pods in sequence. Now, when I try to scale down the same way using StrimziPodSets, it throws an error, even though `kubectl get sps -n kafka` returns the resources. Could you please help with scaling StrimziPodSets down and up? Otherwise, every time I need to restart the Kafka cluster, I have to remove the cluster definition and redeploy it, which is scary in production. Thanks!
-
If you want to scale the cluster up or down, you always have to do it in the Kafka custom resource. It was already like that with StatefulSets, and it remains this way with StrimziPodSets.
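As a minimal sketch of what that looks like (the cluster name `my-cluster` and namespace `kafka` are assumptions; adjust them to your deployment):

```sh
# Assumed cluster name "my-cluster" and namespace "kafka" -- adjust as needed.
# Change the broker count in the Kafka custom resource; the operator then
# scales the underlying StrimziPodSet for you.
kubectl patch kafka my-cluster -n kafka --type=merge \
  -p '{"spec":{"kafka":{"replicas":3}}}'

# Do not edit or scale the StrimziPodSet directly -- the operator owns it
# and will revert manual changes on the next reconciliation.
```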
You cannot scale Kafka to 0. It is a stateful application and the data have to live somewhere.
If you want to stop it, then call it that, because it is not scaling and the terminology is confusing. To stop the cluster, you can stop the operator, delete the StrimziPodSet resources, and have the operator recreate them when you want to continue with the cluster.
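As a rough sketch of that stop/resume procedure (the operator deployment name `strimzi-cluster-operator`, the cluster name `my-cluster`, and the namespace `kafka` are assumptions; the persistent volume claims keep the data while the pods are gone):

```sh
# Stop: scale the Cluster Operator to 0 so it does not recreate the pods,
# then delete the StrimziPodSets (the PVCs and the data remain).
kubectl scale deployment strimzi-cluster-operator -n kafka --replicas=0
kubectl delete strimzipodset my-cluster-kafka my-cluster-zookeeper -n kafka

# Resume: scale the operator back up; on its next reconciliation it
# recreates the StrimziPodSets and the pods come back with their data.
kubectl scale deployment strimzi-cluster-operator -n kafka --replicas=1
```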