To upgrade the Splunk Operator for Kubernetes, you overwrite the prior Operator release with the latest version. Once the latest version of splunk-operator-namespace.yaml (see below) is applied, the CRDs are updated and the Operator deployment is updated with the newer Splunk Operator image. Any new spec defined by the operator is applied to the pods managed by the Splunk Operator for Kubernetes.
A Splunk Operator for Kubernetes upgrade might include support for a later version of the Splunk Enterprise Docker image. In that scenario, after the Splunk Operator completes its upgrade, the pods managed by Splunk Operator for Kubernetes will be restarted using the latest Splunk Enterprise Docker image.
- Note: The Splunk Operator does not provide a way to downgrade to a previous release.
- Before you upgrade, review the Splunk Operator change log page for information on changes made in the latest release. The Splunk Enterprise Docker image compatibility is noted in each release version.
- If the Splunk Enterprise Docker image changes, review the Splunk Enterprise Upgrade Readme page before upgrading.
- For general information about Splunk Enterprise compatibility and the upgrade process, see How to upgrade Splunk Enterprise.
- If you use forwarders, verify the Splunk Enterprise version compatibility with the forwarders in the Compatibility between forwarders and Splunk Enterprise indexers documentation.
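As part of these pre-upgrade checks, it can help to record the operator image currently deployed so you have a baseline to compare against after the upgrade. A minimal sketch, assuming the default deployment name splunk-operator-controller-manager in the splunk-operator namespace:

```
# Record the operator image currently in use (deployment name/namespace may differ in your install)
kubectl -n splunk-operator get deployment splunk-operator-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'
```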
1. Download the latest Splunk Operator installation yaml file.

```
wget -O splunk-operator-namespace.yaml https://github.com/splunk/splunk-operator/releases/download/2.6.1/splunk-operator-namespace.yaml
```
2. (Optional) Review the file and update it with your specific customizations used during your install.
3. Upgrade the Splunk Operator.
```
kubectl apply -f splunk-operator-namespace.yaml --server-side --force-conflicts
```

After applying the yaml, a new operator pod will be created and the existing operator pod will be terminated. Example:

```
kubectl get pods
NAME                                                  READY   STATUS    RESTARTS   AGE
splunk-operator-controller-manager-75f5d4d85b-8pshn   1/1     Running   0          5s
```
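Rather than polling `kubectl get pods`, you can wait on the deployment rollout to confirm the new operator pod is up. A minimal sketch, assuming the default deployment name and namespace:

```
# Blocks until the new operator pod is running and the old one has terminated
kubectl -n splunk-operator rollout status deployment/splunk-operator-controller-manager
```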
If a Splunk Operator release changes the custom resource (CRD) API version, the administrator is responsible for updating their Custom Resource specification to reference the latest CRD API version.
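If you need to confirm which CRD API versions the cluster currently serves before updating your Custom Resource specifications, you can query the CRDs directly. A minimal sketch:

```
# List the served API versions for the operator's CRDs
kubectl get crds -o custom-columns=NAME:.metadata.name,VERSIONS:.spec.versions[*].name | grep splunk
```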
The Splunk Operator follows the upgrade path steps described in the Splunk documentation. If a Splunk Operator release includes an updated Splunk Enterprise Docker image, the operator upgrade will also initiate pod restarts using the latest Splunk Enterprise Docker image. To follow the best practices described under "General Process to Upgrade the Splunk Enterprise", the recommended upgrade path is followed while initiating pod restarts of the different Splunk instances. At each step, if a particular CR instance exists, a defined order is imposed to ensure that each instance is updated correctly. After an instance is upgraded, the Operator verifies that the upgrade was successful and all the components are working as expected. If any unexpected behavior is detected, the process is terminated.
Upgrading the Splunk Operator from version 1.0.5 or older to the latest release is a new installation rather than an upgrade of the current operator installation. The older Splunk Operator must be cleaned up before installing the new version. You should upgrade the operator to 1.1.0 first, and then follow the normal upgrade process from 1.1.0 to the latest release.
The upgrade-to-1.1.0.sh script performs the cleanup and installs the 1.1.0 Splunk Operator. The script expects the current namespace where the operator is installed and the path to the latest operator deployment manifest file. The script performs the following steps:
- Backs up all the operator resources within the namespace, such as the service-account, deployment, role, role-binding, cluster-role, and cluster-role-binding
- Deletes all the old Splunk Operator resources and deployment
- Installs the operator in the splunk-operator namespace
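If you prefer to take a backup yourself before running the script, a minimal sketch of the namespaced portion (the splunk-operator namespace is an assumption; adjust to your install):

```
# Dump the namespaced operator resources to a local file before cleanup.
# Cluster-scoped resources (cluster-role, cluster-role-binding) are not namespaced
# and must be exported separately by name.
kubectl -n splunk-operator get serviceaccount,deployment,role,rolebinding -o yaml > operator-backup.yaml
```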
1. Download the upgrade script.

```
wget -O operator-upgrade.sh https://github.com/splunk/splunk-operator/releases/download/1.1.0/upgrade-to-1.1.0.sh
```

2. Download the 1.1.0 Splunk Operator installation yaml file.

```
wget -O splunk-operator-install.yaml https://github.com/splunk/splunk-operator/releases/download/1.1.0/splunk-operator-install.yaml
```
3. (Optional) Review the file and update it with your specific customizations used during your install.

4. Upgrade the Splunk Operator. Set KUBECONFIG and run the downloaded operator-upgrade.sh script with the following mandatory arguments:

   - `current_namespace`: current namespace where the operator is installed
   - `manifest_file`: path to the 1.1.0 Splunk Operator manifest file

```
./operator-upgrade.sh --current_namespace=splunk-operator --manifest_file=splunk-operator-install.yaml
```
Note: This script can be run from a Mac or Linux system. To run this script on Windows, use Cygwin.
If the Splunk Operator is installed cluster-wide, edit the splunk-operator-controller-manager-<podid> deployment in the splunk-operator namespace and set the WATCH_NAMESPACE field to the namespace that needs to be monitored by the Splunk Operator:
```yaml
...
env:
- name: WATCH_NAMESPACE
  value: "splunk-operator"
- name: RELATED_IMAGE_SPLUNK_ENTERPRISE
  value: splunk/splunk:9.1.3
- name: OPERATOR_NAME
  value: splunk-operator
- name: POD_NAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
...
```
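Instead of editing the deployment interactively, the same change can be applied with `kubectl set env`. A minimal sketch, assuming the default deployment name; the target namespace splunk-apps is a hypothetical example:

```
# Point the operator at a single namespace to monitor
kubectl -n splunk-operator set env deployment/splunk-operator-controller-manager WATCH_NAMESPACE=splunk-apps
```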
If a Splunk Operator release includes an updated Splunk Enterprise Docker image, the operator upgrade will also initiate pod restart using the latest Splunk Enterprise Docker image.
To verify the Splunk Operator has been upgraded to the release image in splunk-operator-install.yaml, you can check the operator image version in the deployment spec, and then the image in the Pod spec of the newly deployed operator pod.
Example:
```
kubectl get deployment splunk-operator -o yaml | grep -i image
image: docker.io/splunk/splunk-operator:<desired_operator_version>
imagePullPolicy: IfNotPresent

kubectl get pod <splunk_operator_pod> -o yaml | grep -i image
image: docker.io/splunk/splunk-operator:<desired_operator_version>
imagePullPolicy: IfNotPresent
```
To verify that a new Splunk Enterprise Docker image was applied to a pod, you can check the version of the image. Example:
```
kubectl get pods splunk-<crname>-monitoring-console-0 -o yaml | grep -i image
image: splunk/splunk:9.1.3
imagePullPolicy: IfNotPresent
```
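To check every Splunk Enterprise pod in one pass rather than one pod at a time, you can list pod names alongside their images. A minimal sketch; the label selector assumes the operator applies its standard app.kubernetes.io labels to the pods it manages:

```
# Print each managed pod with the image it is running
kubectl get pods -l app.kubernetes.io/managed-by=splunk-operator \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```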
The Splunk Operator mostly adheres to the upgrade path steps delineated in the Splunk documentation. All pods of the custom resources are deleted and redeployed sequentially. Where multi-zone Indexer clusters are used, they are redeployed zone by zone. Each pod upgrade is verified to confirm the process succeeded and that everything is functioning as expected. If there are multiple pods per Custom Resource, the pods are terminated and redeployed in descending order, with the highest numbered pod going first.
This is an example of the process followed by the Splunk Operator when the operator version is upgraded and a later Splunk Enterprise Docker image is available. Pod termination and redeployment occur in the following order, based on the recommended upgrade path:
- Splunk Operator deployment pod
- Standalone
- License manager
- ClusterManager
- Search Head cluster
- Indexer Cluster
- Monitoring Console
Note: The order above assumes that the custom resources are linked via references. Custom resources without references are deleted and redeployed independently of this order.
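Once the restarts finish, a quick way to confirm that each custom resource came back healthy is to check the phase each one reports. A minimal sketch; the resource names below are the operator's CRDs, and the Phase column is assumed to be part of their default printer output:

```
# All CRs should eventually report a Ready phase after the rolling restarts complete
kubectl get standalone,licensemanager,clustermanager,searchheadcluster,indexercluster,monitoringconsole -A
```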