diff --git a/experiments/generic/container_kill/README.md b/experiments/generic/container_kill/README.md index a7e0b066808..43ccb67e2cb 100644 --- a/experiments/generic/container_kill/README.md +++ b/experiments/generic/container_kill/README.md @@ -1,3 +1,5 @@ +## Experiment Metadata + diff --git a/experiments/generic/drain_node/README.md b/experiments/generic/node_drain/README.md similarity index 76% rename from experiments/generic/drain_node/README.md rename to experiments/generic/node_drain/README.md index fff5f6d85a5..578ec9439e4 100644 --- a/experiments/generic/drain_node/README.md +++ b/experiments/generic/node_drain/README.md @@ -7,8 +7,8 @@ - + - +
Name Documentation Link
Drain Node Node Drain This experiment drains the node where the application pod is running and verifies that it is rescheduled on another available node. Here Here
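For orientation, a minimal sketch of the kind of Kubernetes Job that runs this litmusbook after the rename below (the image, service account, and env list are assumptions; the authoritative spec is `node_drain_k8s_job.yml`, updated later in this diff):

```yml
apiVersion: batch/v1
kind: Job
metadata:
  name: node-drain            # illustrative name
  namespace: litmus           # assumption: namespace where litmus jobs run
spec:
  template:
    spec:
      serviceAccountName: litmus                 # assumption: an SA with permission to drain/uncordon nodes
      restartPolicy: Never
      containers:
      - name: ansibletest
        image: litmuschaos/ansible-runner:latest # assumption: use the image pinned in node_drain_k8s_job.yml
        env:
        - name: APP_NAMESPACE                    # hypothetical; the full env list lives in the real manifest
          value: ''
        command: ["/bin/bash"]
        # Playbook path reflects the drain_node -> node_drain rename in this change
        args: ["-c", "ansible-playbook ./experiments/generic/node_drain/node_drain_ansible_logic.yml -i /etc/ansible/hosts -vv; exit 0"]
```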
diff --git a/experiments/generic/drain_node/drain_node_ansible_logic.yml b/experiments/generic/node_drain/node_drain_ansible_logic.yml similarity index 100% rename from experiments/generic/drain_node/drain_node_ansible_logic.yml rename to experiments/generic/node_drain/node_drain_ansible_logic.yml diff --git a/experiments/generic/drain_node/drain_node_k8s_job.yml b/experiments/generic/node_drain/node_drain_k8s_job.yml similarity index 90% rename from experiments/generic/drain_node/drain_node_k8s_job.yml rename to experiments/generic/node_drain/node_drain_k8s_job.yml index 774ccf82e83..6aa73bb4c76 100644 --- a/experiments/generic/drain_node/drain_node_k8s_job.yml +++ b/experiments/generic/node_drain/node_drain_k8s_job.yml @@ -46,4 +46,4 @@ spec: value: '' command: ["/bin/bash"] - args: ["-c", "ansible-playbook ./experiments/generic/drain_node/drain_node_ansible_logic.yml -i /etc/ansible/hosts -vv; exit 0"] + args: ["-c", "ansible-playbook ./experiments/generic/node_drain/node_drain_ansible_logic.yml -i /etc/ansible/hosts -vv; exit 0"] diff --git a/experiments/generic/pod_network_corruption/README.md b/experiments/generic/pod_network_corruption/README.md index b8c2f4d7902..c7294bc6771 100644 --- a/experiments/generic/pod_network_corruption/README.md +++ b/experiments/generic/pod_network_corruption/README.md @@ -1,9 +1,15 @@ ## Experiment Metadata -| Type | Description | K8s Platform | -| ----- | ------------------------------------------------------------ | ------------ | -| Chaos | Inject network packet corruption into application pod | Any | - -## Experient documentation - -The corresponding documentation can be found [here](https://docs.litmuschaos.io/docs/pod-network-corruption/) + + + + + + + + + + + +
Name Description Documentation Link
Pod Network Corruption Inject network packet corruption into the application pod + Here
+ + Name + Description + Documentation Link + + + Kafka Broker Disk Failure + Fail kafka broker disk/storage. This experiment causes forced detach of specified disk serving as storage for the Kafka broker pod + Here + + diff --git a/experiments/kafka/kafka-broker-pod-failure/README.md b/experiments/kafka/kafka-broker-pod-failure/README.md index 61dbf98dc36..275fa9f7cf5 100644 --- a/experiments/kafka/kafka-broker-pod-failure/README.md +++ b/experiments/kafka/kafka-broker-pod-failure/README.md @@ -1,55 +1,14 @@ -### Sample ChaosEngine manifest to execute kafka broker kill experiment - -- To override experiment defaults, add the ENV variables in `spec.components` of the experiment. - - ```yml - apiVersion: litmuschaos.io/v1alpha1 - kind: ChaosEngine - metadata: - name: kafka-chaos - namespace: default - spec: - appinfo: - appns: default - applabel: 'app=cp-kafka' - appkind: statefulset - chaosServiceAccount: kafka-sa - monitoring: false - experiments: - - name: kafka-broker-pod-failure - spec: - components: - # choose based on available kafka broker replicas - - name: KAFKA_REPLICATION_FACTOR - value: '3' - - # get via "kubectl get pods --show-labels -n " - - name: KAFKA_LABEL - value: 'app=cp-kafka' - - - name: KAFKA_NAMESPACE - value: 'default' - - # get via "kubectl get svc -n " - - name: KAFKA_SERVICE - value: 'kafka-cp-kafka-headless' - - # get via "kubectl get svc -n - - name: KAFKA_PORT - value: '9092' - - - name: ZOOKEEPER_NAMESPACE - value: 'default' - - # get via "kubectl get pods --show-labels -n " - - name: ZOOKEEPER_LABEL - value: 'app=cp-zookeeper' - - # get via "kubectl get svc -n - - name: ZOOKEEPER_SERVICE - value: 'kafka-cp-zookeeper-headless' - - # get via "kubectl get svc -n - - name: ZOOKEEPER_PORT - value: '2181' - ``` \ No newline at end of file +## Experiment Metadata + + + + + + + + + + + + +
Name Description Documentation Link
Kafka Broker Pod Failure Fail kafka leader-broker pods. This experiment causes (forced/graceful) pod failure of specific/random Kafka broker pods Here
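For quick reference, a trimmed ChaosEngine manifest for this experiment (values mirror the sample previously embedded in this README and assume a cp-kafka Helm deployment; override the ENV values under `spec.components` for your cluster):

```yml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: kafka-chaos
  namespace: default
spec:
  appinfo:
    appns: default
    applabel: 'app=cp-kafka'
    appkind: statefulset
  chaosServiceAccount: kafka-sa
  monitoring: false
  experiments:
  - name: kafka-broker-pod-failure
    spec:
      components:
      # choose based on available kafka broker replicas
      - name: KAFKA_REPLICATION_FACTOR
        value: '3'
      # get via "kubectl get pods --show-labels -n <kafka-namespace>"
      - name: KAFKA_LABEL
        value: 'app=cp-kafka'
      - name: KAFKA_NAMESPACE
        value: 'default'
      # get via "kubectl get svc -n <kafka-namespace>"
      - name: KAFKA_SERVICE
        value: 'kafka-cp-kafka-headless'
      - name: KAFKA_PORT
        value: '9092'
      - name: ZOOKEEPER_NAMESPACE
        value: 'default'
      # get via "kubectl get pods --show-labels -n <zookeeper-namespace>"
      - name: ZOOKEEPER_LABEL
        value: 'app=cp-zookeeper'
      - name: ZOOKEEPER_SERVICE
        value: 'kafka-cp-zookeeper-headless'
      - name: ZOOKEEPER_PORT
        value: '2181'
```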
diff --git a/experiments/openebs/openebs-pool-container-failure/README.md b/experiments/openebs/openebs-pool-container-failure/README.md index 291b7aa59c9..1445309789a 100644 --- a/experiments/openebs/openebs-pool-container-failure/README.md +++ b/experiments/openebs/openebs-pool-container-failure/README.md @@ -2,118 +2,15 @@ - - - - - - - - - - - -
Type Description Storage K8s Platform
Chaos Kill the pool container and check if gets scheduled again OPENEBS Any
- -## Entry-Criteria - -- Application services are accessible & pods are healthy -- Application writes are successful - -## Exit-Criteria - -- Application services are accessible & pods are healthy -- Data written prior to chaos is successfully retrieved/read -- Database consistency is maintained as per db integrity check utils -- Storage target pods are healthy - -## Notes - -- Typically used as a disruptive test, to cause loss of access to storage pool by killing it. -- The pool pod should start again and it should be healthy. - -## Associated Utils - -- [pumba/pod_failure_by_sigkill.yaml](/chaoslib/pumba/pod_failure_by_sigkill.yaml) -- [cstor_pool_kill.yml](/experiments/openebs/openebs-pool-container-failure/cstor_pool_kill.yml) - -### Procedure - -This scenario validates the behaviour of application and OpenEBS persistent volumes in the amidst of chaos induced on storage pool. The litmus experiment fails the specified pool and thereby losing the access to volumes being created on it. - -After injecting the chaos into the component specified via environmental variable, litmus experiment observes the behaviour of corresponding OpenEBS PV and the application which consumes the volume. - -Based on the value of env DATA_PERSISTENCE, the corresponding data consistency util will be executed. At present only busybox and percona-mysql are supported. Along with specifying env in the litmus experiment, user needs to pass name for configmap and the data consistency specific parameters required via configmap in the format as follows: - - parameters.yml: | - blocksize: 4k - blockcount: 1024 - testfile: difiletest - -It is recommended to pass test-name for configmap and mount the corresponding configmap as volume in the litmus pod. The above snippet holds the parameters required for validation data consistency in busybox application. - -For percona-mysql, the following parameters are to be injected into configmap. - - parameters.yml: | - dbuser: root - dbpassword: k8sDem0 - dbname: tdb - -The configmap data will be utilised by litmus experiments as its variables while executing the scenario. Based on the data provided, litmus checks if the data is consistent after recovering from induced chaos. - -## Litmusbook Environment Variables - -### Application - - - - - - - - - - - - - - - - - -
Parameter - Description
APP_NAMESPACE Namespace in which application pods are deployed
APP_LABEL Unique Labels in `key=value` format of application deployment
APP_PVC Name of persistent volume claim used for app's volume mounts
- -### Chaos - - - - + + - - - - -
Parameter Name Description Documentation Link
CHAOS_ITERATIONS The number of chaos iterations
- -### Health Checks - - - - - - - - - - - - - - - - -
Parameter - Description
LIVENESS_APP_NAMESPACE Namespace in which external liveness pods are deployed, if any
LIVENESS_APP_LABEL Unique Labels in `key=value` format for external liveness pod, if any
DATA_PERSISTENCE Data accessibility & integrity verification post recovery. To check against busybox set value: "busybox" and for percona, set value: "mysql"
\ No newline at end of file + OpenEBS Pool Container Failure + Kill the pool container and check if it gets scheduled again. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the storage pool. The litmus experiment fails the specified pool, thereby losing access to the volume replicas created on it. + + Here + + + diff --git a/experiments/openebs/openebs-pool-pod-failure/README.md b/experiments/openebs/openebs-pool-pod-failure/README.md index 5df32a2a921..e0e31c74b55 100644 --- a/experiments/openebs/openebs-pool-pod-failure/README.md +++ b/experiments/openebs/openebs-pool-pod-failure/README.md @@ -2,120 +2,14 @@
Type Description Storage K8s Platform
Chaos Kill the pool pod and check if gets scheduled again OPENEBS Any
- -## Entry-Criteria - -- Application services are accessible & pods are healthy -- Application writes are successful - -## Exit-Criteria - -- Application services are accessible & pods are healthy -- Data written prior to chaos is successfully retrieved/read -- Database consistency is maintained as per db integrity check utils -- Storage target pods are healthy - -## Notes - -- Typically used as a disruptive test, to cause loss of access to storage pool by killing it. -- The pool pod should start again and it should be healthy. - -## Associated Utils - -- [cstor_pool_delete.yml](/experiments/openebs/openebs-pool-container-failure/cstor_pool_delete.yml) -- [cstor_pool_health_check.yml](/experiments/openebs/openebs-pool-container-failure/cstor_pool_health_check.yml) -- [cstor_verify_pool_provisioning.yml](/experiments/openebs/openebs-pool-container-failure/cstor_verify_pool_provisioning.yml) -- [cstor_delete_and_verify_pool_deployment.yml](/experiments/openebs/openebs-pool-container-failure/cstor_delete_and_verify_pool_deployment.yml) - -### Procedure - -This scenario validates the behaviour of application and OpenEBS persistent volumes in the amidst of chaos induced on storage pool. The litmus experiment fails the specified pool and thereby losing the access to volumes being created on it. - -After injecting the chaos into the component specified via environmental variable, litmus experiment observes the behaviour of corresponding OpenEBS PV and the application which consumes the volume. - -Based on the value of env DATA_PERSISTENCE, the corresponding data consistency util will be executed. At present only busybox and percona-mysql are supported. Along with specifying env in the litmus experiment, user needs to pass name for configmap and the data consistency specific parameters required via configmap in the format as follows: - - parameters.yml: | - blocksize: 4k - blockcount: 1024 - testfile: difiletest - -It is recommended to pass test-name for configmap and mount the corresponding configmap as volume in the litmus pod. The above snippet holds the parameters required for validation data consistency in busybox application. - -For percona-mysql, the following parameters are to be injected into configmap. - - parameters.yml: | - dbuser: root - dbpassword: k8sDem0 - dbname: tdb - -The configmap data will be utilised by litmus experiments as its variables while executing the scenario. Based on the data provided, litmus checks if the data is consistent after recovering from induced chaos. - -## Litmusbook Environment Variables - -### Application - - - - - - - - - - - - - - - - - -
Parameter - Description
APP_NAMESPACE Namespace in which application pods are deployed
APP_LABEL Unique Labels in `key=value` format of application deployment
APP_PVC Name of persistent volume claim used for app's volume mounts
- -### Chaos - - - - + + - - - - -
Parameter Name Description Documentation Link
CHAOS_ITERATIONS The number of chaos iterations
- -### Health Checks - - - - - - - - - - - - - - - - -
Parameter - Description
LIVENESS_APP_NAMESPACE Namespace in which external liveness pods are deployed, if any
LIVENESS_APP_LABEL Unique Labels in `key=value` format for external liveness pod, if any
DATA_PERSISTENCE Data accessibility & integrity verification post recovery. To check against busybox set value: "busybox" and for percona, set value: "mysql"
\ No newline at end of file + OpenEBS Pool Pod Failure + Kill the pool pod and check if it gets scheduled again. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the storage pool. The litmus experiment fails the specified pool, thereby losing access to the volumes created on it. + + Here + + diff --git a/experiments/openebs/openebs-target-container-failure/README.md b/experiments/openebs/openebs-target-container-failure/README.md index 7ea0b556461..6da5e0fdd69 100644 --- a/experiments/openebs/openebs-target-container-failure/README.md +++ b/experiments/openebs/openebs-target-container-failure/README.md @@ -2,108 +2,15 @@
Type Description Storage K8s Platform Name Description Documentation Link
Chaos Kill the cstor target/Jiva controller container and check if gets created again OPENEBS Any
- -## Entry-Criteria - -- Application services are accessible & pods are healthy -- Application writes are successful - -## Exit-Criteria - -- Application services are accessible & pods are healthy -- Data written prior to chaos is successfully retrieved/read -- Database consistency is maintained as per db integrity check utils -- Storage target pods are healthy - -### Notes - -- Typically used as a disruptive test, to cause loss of access to storage target by killing the containers. -- The container should be created again and it should be healthy. - -## Associated Utils -- [cstor_target_container_kill.yml](/experiments/openebs/openebs-target-container-failure/cstor_target_container_kill.yml) -- [jiva_controller_container_kill.yml](/experiments/openebs/openebs-target-container-failure/jiva_controller_container_kill.yml) -- [fetch_sc_and_provisioner.yml](/utils/apps/openebs/fetch_sc_and_provisioner.yml) -- [target_affinity_check.yml](/utils/apps/openebs/target_affinity_check.yml) - -## Litmus experiment Environment Variables - -### Application - - - - - - - - - - - - - - - - - - - - - -
Parameter - Description
APP_NAMESPACE Namespace in which application pods are deployed
APP_LABEL Unique Labels in `key=value` format of application deployment
APP_PVC Name of persistent volume claim used for app's volume mounts
DATA_PERSISTENCE Specify the application name against which data consistency has to be ensured. Example: busybox
- -### Chaos - - - - - - - - - - -
CHAOS_TYPE The type of chaos to be induced.
TARGET_CONTAINER The container against which chaos has to be induced.
- -### Procedure - -This scenario validates the behaviour of application and OpenEBS persistent volumes in the amidst of chaos induced on OpenEBS data plane and control plane components. - -After injecting the chaos into the component specified via environmental variable, litmus experiment observes the behaviour of corresponding OpenEBS PV and the application which consumes the volume. - -Based on the value of env `DATA_PERSISTENCE`, the corresponding data consistency util will be executed. At present only busybox and percona-mysql are supported. Along with specifying env in the litmus experiment, user needs to pass name for configmap and the data consistency specific parameters required via configmap in the format as follows: - -```yml - parameters.yml: | - blocksize: 4k - blockcount: 1024 - testfile: difiletest -``` - -It is recommended to pass test-name for configmap and mount the corresponding configmap as volume in the litmus pod. The above snippet holds the parameters required for validation data consistency in busybox application. - -For percona-mysql, the following parameters are to be injected into configmap. - -```yml - parameters.yml: | - dbuser: root - dbpassword: k8sDemo - dbname: tbd -``` - -The configmap data will be utilised by litmus experiments as its variables while executing the scenario. - -Based on the data provided, litmus checks if the data is consistent after recovering from induced chaos. + OpenEBS Target Container Failure + Kills the cstor target/Jiva controller container and checks if it gets created again. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the OpenEBS data plane controller. + + Here + + + diff --git a/experiments/openebs/openebs-target-network-delay/README.md b/experiments/openebs/openebs-target-network-delay/README.md index b359ad3de42..0ec35286f8a 100644 --- a/experiments/openebs/openebs-target-network-delay/README.md +++ b/experiments/openebs/openebs-target-network-delay/README.md @@ -2,127 +2,14 @@
Type Description Storage K8s Platform
Chaos Inject delay in storage target and verify the application availability OPENEBS Any
- -## Entry-Criteria - -- Application services are accessible & pods are healthy -- Application writes are successful - -## Exit-Criteria - -- Application services are accessible & pods are healthy -- Data written prior to chaos is successfully retrieved/read -- Database consistency is maintained as per db integrity check utils -- Storage target pods are healthy - -## Notes - -- Typically used as a disruptive test, to cause loss of access to storage target by injecting network delay using pumba. -- The application pod should be healthy once it gets recovered. - -## Associated Utils - -- [cstor_target_network_delay.yaml](/experiments/openebs/openebs-target-network-delay/cstor_target_network_delay.yaml) -- [jiva_controller_network_delay.yaml](/experiments/openebs/openebs-target-network-delay/jiva_controller_network_delay.yaml) -- [fetch_sc_and_provisioner.yml](/utils/apps/openebs/fetch_sc_and_provisioner.yml) - -## Litmusbook Environment Variables - -### Application - - - - - - - - - - - - - - - - - -
Parameter - Description
APP_NAMESPACE Namespace in which application pods are deployed
APP_LABEL Unique Labels in `key=value` format of application deployment
APP_PVC Name of persistent volume claim used for app's volume mounts
- -### Chaos - - - - + + - - - - - - - - -
Parameter Name Description Documentation Link
NETWORK_DELAY The time interval in milliseconds
CHAOS_DURATION The time interval for chaos insertion
- -### Health Checks - - - - - - - - - - - - - - - - -
Parameter - Description
LIVENESS_APP_NAMESPACE Namespace in which external liveness pods are deployed, if any
LIVENESS_APP_LABEL Unique Labels in `key=value` format for external liveness pod, if any
DATA_PERSISTENCY Data accessibility & integrity verification post recovery (enabled, disabled)
- -### Procedure - -This scenario validates the behaviour of application and OpenEBS persistent volumes in the amidst of chaos induced on OpenEBS data plane and control plane components. - -After injecting the chaos into the component specified via environmental variable, litmus experiment observes the behaviour of corresponding OpenEBS PV and the application which consumes the volume. - -Based on the value of env DATA_PERSISTENCE, the corresponding data consistency util will be executed. At present only busybox and percona-mysql are supported. Along with specifying env in the litmus experiment, user needs to pass name for configmap and the data consistency specific parameters required via configmap in the format as follows: - -```yml - parameters.yml: | - blocksize: 4k - blockcount: 1024 - testfile: difiletest -``` - -It is recommended to pass test-name for configmap and mount the corresponding configmap as volume in the litmus pod. The above snippet holds the parameters required for validation data consistency in busybox application. - -For percona-mysql, the following parameters are to be injected into configmap. - -```yml - parameters.yml: | - dbuser: root - dbpassword: k8sDem0 - dbname: tdb -``` - -The configmap data will be utilised by litmus experiments as its variables while executing the scenario. Based on the data provided, litmus checks if the data is consistent after recovering from induced chaos. \ No newline at end of file + OpenEBS Target Network Delay + Injects network delay in the storage target and verifies application availability. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the OpenEBS data plane controller. + + Here + + diff --git a/experiments/openebs/openebs-target-network-loss/README.md b/experiments/openebs/openebs-target-network-loss/README.md index bccde806998..e91416ed639 100644 --- a/experiments/openebs/openebs-target-network-loss/README.md +++ b/experiments/openebs/openebs-target-network-loss/README.md @@ -2,127 +2,14 @@
Type Description Storage Application K8s Platform
Chaos Inject n/w delay on storage target/controller OPENEBS Percona MySQL Any
- -## Entry-Criteria - -- Application services are accessible & pods are healthy -- Application writes are successful - -## Exit-Criteria - -- Application services are accessible & pods should not be in running state -- Storage target pods are healthy - -## Notes - -- Typically used as a disruptive test, to cause loss of access to storage by injecting prolonged network delay -- Tests Recovery workflows for the PV & data integrity post recovery - -## Associated Utils - -- [cstor_target_network_delay.yaml](/experiments/openebs/openebs-target-network-delay/cstor_target_network_delay.yaml) -- [jiva_controller_network_delay.yaml](/experiments/openebs/openebs-target-network-delay/jiva_controller_network_delay.yaml) -- [fetch_sc_and_provisioner.yml](/utils/apps/openebs/fetch_sc_and_provisioner.yml) - -## Litmus experiment Environment Variables - -### Application - - - - - - - - - - - - - - - - - -
Parameter - Description
APP_NAMESPACE Namespace in which application pods are deployed
APP_LABEL Unique Labels in `key=value` format of application deployment
APP_PVC Name of persistent volume claim used for app's volume mounts
- -### Chaos - - - - + + - - - - - - - - -
Parameter Name Description Documentation Link
NETWORK_DELAY Egress delay (in msec) on the target pod
CHAOS_DURATION Period (in sec)for which induced delay is maintained
- -### Health Checks - - - - - - - - - - - - - - - - -
Parameter - Description
LIVENESS_APP_NAMESPACE Namespace in which external liveness pods are deployed, if any
LIVENESS_APP_LABEL Unique Labels in `key=value` format for external liveness pod, if any
DATA_PERSISTENCE Data accessibility & integrity verification post recovery (enabled, disabled)
- -### Procedure - -This scenario validates the behaviour of application and OpenEBS persistent volumes in the amidst of chaos induced on OpenEBS data plane and control plane components. - -After injecting the chaos into the component specified via environmental variable, litmus experiment observes the behaviour of corresponding OpenEBS PV and the application which consumes the volume. - -Based on the value of env DATA_PERSISTENCE, the corresponding data consistency util will be executed. At present only busybox and percona-mysql are supported. Along with specifying env in the litmus experiment, user needs to pass name for configmap and the data consistency specific parameters required via configmap in the format as follows: - -```yml - parameters.yml: | - blocksize: 4k - blockcount: 1024 - testfile: difiletest -``` - -It is recommended to pass test-name for configmap and mount the corresponding configmap as volume in the litmus pod. The above snippet holds the parameters required for validation data consistency in busybox application. - -For percona-mysql, the following parameters are to be injected into configmap. - -```yml - parameters.yml: | - dbuser: root - dbpassword: k8sDem0 - dbname: tdb -``` - -The configmap data will be utilised by litmus experiments as its variables while executing the scenario. Based on the data provided, litmus checks if the data is consistent after recovering from induced chaos. + OpenEBS Target Network Loss + Injects network loss on the storage target/controller. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the OpenEBS data plane controller. + + Here + + diff --git a/experiments/openebs/openebs-target-pod-failure/README.md b/experiments/openebs/openebs-target-pod-failure/README.md index 318116c8113..6d1877677be 100644 --- a/experiments/openebs/openebs-target-pod-failure/README.md +++ b/experiments/openebs/openebs-target-pod-failure/README.md @@ -2,97 +2,14 @@
Type Description Storage K8s Platform Name Description Documentation Link
Chaos Kill the cstor/jiva target/controller pod and check if gets created again OPENEBS Any
- -## Entry-Criteria - -- Application services are accessible & pods are healthy -- Application writes are successful - -## Exit-Criteria - -- Application services are accessible & pods are healthy -- Data written prior to chaos is successfully retrieved/read -- Database consistency is maintained as per db integrity check utils -- Storage target pods are healthy - -### Notes - -- Typically used as a disruptive test, to cause loss of access to storage target by killing the containers. -- The container should be created again and it should be healthy. - -## Associated Utils -- [cstor_target_failure.yaml](/experiments/openebs/openebs-target-pod-failure/cstor_target_failure.yaml) -- [jiva_controller_pod_failure.yaml](/experiments/openebs/openebs-target-pod-failure/jiva_controller_pod_failure.yaml) -- [fetch_cstor_target_pod.yml](/utils/apps/openebs/fetch_cstor_target_pod.yml) -- [fetch_jiva_controller_pod.yml](/utils/apps/openebs/fetch_jiva_controller_pod.yml) -- [fetch_sc_and_provisioner.yml](/utils/apps/openebs/fetch_sc_and_provisioner.yml) -- [target_affinity_check.yml](/utils/apps/openebs/target_affinity_check.yml) - -## Litmus experiment Environment Variables - -### Application - - - - - - - - - - - - - - - - - - - - - -
Parameter - Description
APP_NAMESPACE Namespace in which application pods are deployed
APP_LABEL Unique Labels in `key=value` format of application deployment
APP_PVC Name of persistent volume claim used for app's volume mounts
DATA_PERSISTENCE Specify the application name against which data consistency has to be ensured. Example: busybox
- -### Procedure - -This scenario validates the behaviour of application and OpenEBS persistent volumes in the amidst of chaos induced on OpenEBS data plane and control plane components. - -After injecting the chaos into the component specified via environmental variable, litmus experiment observes the behaviour of corresponding OpenEBS PV and the application which consumes the volume. - -Based on the value of env `DATA_PERSISTENCE`, the corresponding data consistency util will be executed. At present only busybox and percona-mysql are supported. Along with specifying env in the litmus experiment, user needs to pass name for configmap and the data consistency specific parameters required via configmap in the format as follows: - -```yml - parameters.yml: | - blocksize: 4k - blockcount: 1024 - testfile: difiletest -``` - -It is recommended to pass test-name for configmap and mount the corresponding configmap as volume in the litmus pod. The above snippet holds the parameters required for validation data consistency in busybox application. - -For percona-mysql, the following parameters are to be injected into configmap. - -```yml - parameters.yml: | - dbuser: root - dbpassword: k8sDemo - dbname: tbd -``` - -The configmap data will be utilised by litmus experiments as its variables while executing the scenario. - -Based on the data provided, litmus checks if the data is consistent after recovering from induced chaos. + OpenEBS Target Pod Failure + Kill the cstor/jiva target/controller pod and check if it gets created again. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the OpenEBS data plane controller. + + Here + +
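The OpenEBS experiments above all key off the DATA_PERSISTENCE env plus a user-supplied configmap; a minimal sketch of that configmap, using the busybox parameters from the replaced Procedure sections (the configmap name and namespace are illustrative — mount it into the litmus pod under the name you pass to the experiment):

```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-chaos-params   # illustrative; use the test/configmap name the litmusbook expects
  namespace: litmus            # assumption: namespace where the litmus experiment pod runs
data:
  parameters.yml: |
    # busybox data-consistency parameters; for percona-mysql use dbuser, dbpassword, dbname instead
    blocksize: 4k
    blockcount: 1024
    testfile: difiletest
```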