-  - name: ZOOKEEPER_PORT
-    value: '2181'
- ```
\ No newline at end of file
+## Experiment Metadata
+
+<table>
+<tr>
+<th> Name </th>
+<th> Description </th>
+<th> Documentation Link </th>
+</tr>
+<tr>
+<td> Kafka Broker Pod Failure </td>
+<td> Fail Kafka leader-broker pods. This experiment causes (forced/graceful) pod failure of specific/random Kafka broker pods </td>
+<td> Here </td>
+</tr>
+</table>
diff --git a/experiments/openebs/openebs-pool-container-failure/README.md b/experiments/openebs/openebs-pool-container-failure/README.md
index 291b7aa59c9..1445309789a 100644
--- a/experiments/openebs/openebs-pool-container-failure/README.md
+++ b/experiments/openebs/openebs-pool-container-failure/README.md
@@ -2,118 +2,15 @@
-<table>
-<tr>
-<th> Type </th>
-<th> Description </th>
-<th> Storage </th>
-<th> K8s Platform </th>
-</tr>
-<tr>
-<td> Chaos </td>
-<td> Kill the pool container and check if it gets scheduled again </td>
-<td> OPENEBS </td>
-<td> Any </td>
-</tr>
-</table>
-
-## Entry-Criteria
-
-- Application services are accessible & pods are healthy
-- Application writes are successful
-
-## Exit-Criteria
-
-- Application services are accessible & pods are healthy
-- Data written prior to chaos is successfully retrieved/read
-- Database consistency is maintained as per db integrity check utils
-- Storage target pods are healthy
-
-## Notes
-
-- Typically used as a disruptive test, to cause loss of access to storage pool by killing it.
-- The pool pod should start again and it should be healthy.
-
-## Associated Utils
-
-- [pumba/pod_failure_by_sigkill.yaml](/chaoslib/pumba/pod_failure_by_sigkill.yaml)
-- [cstor_pool_kill.yml](/experiments/openebs/openebs-pool-container-failure/cstor_pool_kill.yml)
-
-### Procedure
-
-This scenario validates the behaviour of the application and OpenEBS persistent volumes amid chaos induced on the storage pool. The litmus experiment fails the specified pool, thereby losing access to the volumes created on it.
-
-After injecting chaos into the component specified via the environment variable, the litmus experiment observes the behaviour of the corresponding OpenEBS PV and of the application which consumes the volume.
-
-Based on the value of the env DATA_PERSISTENCE, the corresponding data consistency util will be executed. At present, only busybox and percona-mysql are supported. Along with specifying the env in the litmus experiment, the user needs to pass a name for the configmap and supply the data-consistency-specific parameters via that configmap, in the following format:
-
-    parameters.yml: |
-      blocksize: 4k
-      blockcount: 1024
-      testfile: difiletest
-
-It is recommended to pass the test name for the configmap and to mount the corresponding configmap as a volume in the litmus pod. The above snippet holds the parameters required for validating data consistency in the busybox application.
-
-For percona-mysql, the following parameters are to be injected into the configmap.
-
-    parameters.yml: |
-      dbuser: root
-      dbpassword: k8sDem0
-      dbname: tdb
-
-The configmap data will be utilised by the litmus experiment as variables while executing the scenario. Based on the data provided, litmus checks if the data is consistent after recovering from the induced chaos.
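
For illustration, a minimal sketch of such a configmap for the busybox check might look as follows (the configmap name and namespace are hypothetical; only the parameters.yml keys come from the snippet above):

```yml
# Hypothetical example: name and namespace are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: litmus-data-check   # pass this name to the litmus experiment
  namespace: litmus
data:
  parameters.yml: |
    blocksize: 4k
    blockcount: 1024
    testfile: difiletest
```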
-
-## Litmusbook Environment Variables
-
-### Application
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> APP_NAMESPACE </td>
-<td> Namespace in which application pods are deployed </td>
-</tr>
-<tr>
-<td> APP_LABEL </td>
-<td> Unique Labels in `key=value` format of application deployment </td>
-</tr>
-<tr>
-<td> APP_PVC </td>
-<td> Name of persistent volume claim used for app's volume mounts </td>
-</tr>
-</table>
-
-### Chaos
-
 <table>
 <tr>
-<th> Parameter </th>
+<th> Name </th>
 <th> Description </th>
+<th> Documentation Link </th>
 </tr>
 <tr>
-<td> CHAOS_ITERATIONS </td>
-<td> The number of chaos iterations </td>
-</tr>
-</table>
-
-### Health Checks
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> LIVENESS_APP_NAMESPACE </td>
-<td> Namespace in which external liveness pods are deployed, if any </td>
-</tr>
-<tr>
-<td> LIVENESS_APP_LABEL </td>
-<td> Unique Labels in `key=value` format for external liveness pod, if any </td>
-</tr>
-<tr>
-<td> DATA_PERSISTENCE </td>
-<td> Data accessibility & integrity verification post recovery. To check against busybox set value: "busybox" and for percona, set value: "mysql" </td>
-</tr>
-</table>
\ No newline at end of file
+<tr>
+<td> OpenEBS Pool Container Failure </td>
+<td> Kill the pool container and check if it gets scheduled again. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the storage pool. The litmus experiment fails the specified pool, thereby losing access to the volume replicas created on it. </td>
+<td> Here </td>
+</tr>
+</table>
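
For reference, the environment variables documented in these READMEs are supplied through the experiment job's env section, in the same style as the ZOOKEEPER_PORT entry removed above; the values below are placeholders, not defaults:

```yml
# Placeholder values; substitute those of the application under test.
env:
  - name: APP_NAMESPACE
    value: 'app-ns'
  - name: APP_LABEL
    value: 'app=percona'
  - name: APP_PVC
    value: 'percona-mysql-claim'
  - name: CHAOS_ITERATIONS
    value: '2'
  - name: DATA_PERSISTENCE
    value: 'mysql'
```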
diff --git a/experiments/openebs/openebs-pool-pod-failure/README.md b/experiments/openebs/openebs-pool-pod-failure/README.md
index 5df32a2a921..e0e31c74b55 100644
--- a/experiments/openebs/openebs-pool-pod-failure/README.md
+++ b/experiments/openebs/openebs-pool-pod-failure/README.md
@@ -2,120 +2,14 @@
-<table>
-<tr>
-<th> Type </th>
-<th> Description </th>
-<th> Storage </th>
-<th> K8s Platform </th>
-</tr>
-<tr>
-<td> Chaos </td>
-<td> Kill the pool pod and check if it gets scheduled again </td>
-<td> OPENEBS </td>
-<td> Any </td>
-</tr>
-</table>
-
-## Entry-Criteria
-
-- Application services are accessible & pods are healthy
-- Application writes are successful
-
-## Exit-Criteria
-
-- Application services are accessible & pods are healthy
-- Data written prior to chaos is successfully retrieved/read
-- Database consistency is maintained as per db integrity check utils
-- Storage target pods are healthy
-
-## Notes
-
-- Typically used as a disruptive test, to cause loss of access to storage pool by killing it.
-- The pool pod should start again and it should be healthy.
-
-## Associated Utils
-
-- [cstor_pool_delete.yml](/experiments/openebs/openebs-pool-container-failure/cstor_pool_delete.yml)
-- [cstor_pool_health_check.yml](/experiments/openebs/openebs-pool-container-failure/cstor_pool_health_check.yml)
-- [cstor_verify_pool_provisioning.yml](/experiments/openebs/openebs-pool-container-failure/cstor_verify_pool_provisioning.yml)
-- [cstor_delete_and_verify_pool_deployment.yml](/experiments/openebs/openebs-pool-container-failure/cstor_delete_and_verify_pool_deployment.yml)
-
-### Procedure
-
-This scenario validates the behaviour of the application and OpenEBS persistent volumes amid chaos induced on the storage pool. The litmus experiment fails the specified pool, thereby losing access to the volumes created on it.
-
-After injecting chaos into the component specified via the environment variable, the litmus experiment observes the behaviour of the corresponding OpenEBS PV and of the application which consumes the volume.
-
-Based on the value of the env DATA_PERSISTENCE, the corresponding data consistency util will be executed. At present, only busybox and percona-mysql are supported. Along with specifying the env in the litmus experiment, the user needs to pass a name for the configmap and supply the data-consistency-specific parameters via that configmap, in the following format:
-
-    parameters.yml: |
-      blocksize: 4k
-      blockcount: 1024
-      testfile: difiletest
-
-It is recommended to pass the test name for the configmap and to mount the corresponding configmap as a volume in the litmus pod. The above snippet holds the parameters required for validating data consistency in the busybox application.
-
-For percona-mysql, the following parameters are to be injected into the configmap.
-
-    parameters.yml: |
-      dbuser: root
-      dbpassword: k8sDem0
-      dbname: tdb
-
-The configmap data will be utilised by the litmus experiment as variables while executing the scenario. Based on the data provided, litmus checks if the data is consistent after recovering from the induced chaos.
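
A hypothetical fragment showing how such a configmap could be mounted as a volume in the litmus pod (the names, image, and mount path are assumptions for illustration, not the shipped manifest):

```yml
# Illustrative pod spec fragment; names, image, and mount path are assumed.
spec:
  containers:
    - name: litmus
      image: litmuschaos/ansible-runner:latest
      volumeMounts:
        - name: parameters
          mountPath: /mnt/parameters   # the experiment reads parameters.yml from here
  volumes:
    - name: parameters
      configMap:
        name: litmus-data-check        # the configmap holding parameters.yml
```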
-
-## Litmusbook Environment Variables
-
-### Application
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> APP_NAMESPACE </td>
-<td> Namespace in which application pods are deployed </td>
-</tr>
-<tr>
-<td> APP_LABEL </td>
-<td> Unique Labels in `key=value` format of application deployment </td>
-</tr>
-<tr>
-<td> APP_PVC </td>
-<td> Name of persistent volume claim used for app's volume mounts </td>
-</tr>
-</table>
-
-### Chaos
-
 <table>
 <tr>
-<th> Parameter </th>
+<th> Name </th>
 <th> Description </th>
+<th> Documentation Link </th>
 </tr>
 <tr>
-<td> CHAOS_ITERATIONS </td>
-<td> The number of chaos iterations </td>
-</tr>
-</table>
-
-### Health Checks
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> LIVENESS_APP_NAMESPACE </td>
-<td> Namespace in which external liveness pods are deployed, if any </td>
-</tr>
-<tr>
-<td> LIVENESS_APP_LABEL </td>
-<td> Unique Labels in `key=value` format for external liveness pod, if any </td>
-</tr>
-<tr>
-<td> DATA_PERSISTENCE </td>
-<td> Data accessibility & integrity verification post recovery. To check against busybox set value: "busybox" and for percona, set value: "mysql" </td>
-</tr>
-</table>
\ No newline at end of file
+<tr>
+<td> OpenEBS Pool Pod Failure </td>
+<td> Kill the pool pod and check if it gets scheduled again. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the storage pool. The litmus experiment fails the specified pool, thereby losing access to the volumes created on it. </td>
+<td> Here </td>
+</tr>
+</table>
diff --git a/experiments/openebs/openebs-target-container-failure/README.md b/experiments/openebs/openebs-target-container-failure/README.md
index 7ea0b556461..6da5e0fdd69 100644
--- a/experiments/openebs/openebs-target-container-failure/README.md
+++ b/experiments/openebs/openebs-target-container-failure/README.md
@@ -2,108 +2,15 @@
 <table>
 <tr>
-<th> Type </th>
-<th> Description </th>
-<th> Storage </th>
-<th> K8s Platform </th>
+<th> Name </th>
+<th> Description </th>
+<th> Documentation Link </th>
 </tr>
 <tr>
-<td> Chaos </td>
-<td> Kill the cstor target/Jiva controller container and check if it gets created again </td>
-<td> OPENEBS </td>
-<td> Any </td>
-</tr>
-</table>
-
-## Entry-Criteria
-
-- Application services are accessible & pods are healthy
-- Application writes are successful
-
-## Exit-Criteria
-
-- Application services are accessible & pods are healthy
-- Data written prior to chaos is successfully retrieved/read
-- Database consistency is maintained as per db integrity check utils
-- Storage target pods are healthy
-
-### Notes
-
-- Typically used as a disruptive test, to cause loss of access to storage target by killing the containers.
-- The container should be created again and it should be healthy.
-
-## Associated Utils
-- [cstor_target_container_kill.yml](/experiments/openebs/openebs-target-container-failure/cstor_target_container_kill.yml)
-- [jiva_controller_container_kill.yml](/experiments/openebs/openebs-target-container-failure/jiva_controller_container_kill.yml)
-- [fetch_sc_and_provisioner.yml](/utils/apps/openebs/fetch_sc_and_provisioner.yml)
-- [target_affinity_check.yml](/utils/apps/openebs/target_affinity_check.yml)
-
-## Litmus experiment Environment Variables
-
-### Application
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> APP_NAMESPACE </td>
-<td> Namespace in which application pods are deployed </td>
-</tr>
-<tr>
-<td> APP_LABEL </td>
-<td> Unique Labels in `key=value` format of application deployment </td>
-</tr>
-<tr>
-<td> APP_PVC </td>
-<td> Name of persistent volume claim used for app's volume mounts </td>
-</tr>
-<tr>
-<td> DATA_PERSISTENCE </td>
-<td> Specify the application name against which data consistency has to be ensured. Example: busybox </td>
-</tr>
-</table>
-
-### Chaos
-
-<table>
-<tr>
-<td> CHAOS_TYPE </td>
-<td> The type of chaos to be induced. </td>
-</tr>
-<tr>
-<td> TARGET_CONTAINER </td>
-<td> The container against which chaos has to be induced. </td>
-</tr>
-</table>
-
-### Procedure
-
-This scenario validates the behaviour of the application and OpenEBS persistent volumes amid chaos induced on OpenEBS data plane and control plane components.
-
-After injecting chaos into the component specified via the environment variable, the litmus experiment observes the behaviour of the corresponding OpenEBS PV and of the application which consumes the volume.
-
-Based on the value of the env `DATA_PERSISTENCE`, the corresponding data consistency util will be executed. At present, only busybox and percona-mysql are supported. Along with specifying the env in the litmus experiment, the user needs to pass a name for the configmap and supply the data-consistency-specific parameters via that configmap, in the following format:
-
-```yml
-  parameters.yml: |
-    blocksize: 4k
-    blockcount: 1024
-    testfile: difiletest
-```
-
-It is recommended to pass the test name for the configmap and to mount the corresponding configmap as a volume in the litmus pod. The above snippet holds the parameters required for validating data consistency in the busybox application.
-
-For percona-mysql, the following parameters are to be injected into the configmap.
-
-```yml
-  parameters.yml: |
-    dbuser: root
-    dbpassword: k8sDemo
-    dbname: tbd
-```
-
-The configmap data will be utilised by the litmus experiment as variables while executing the scenario.
-
-Based on the data provided, litmus checks if the data is consistent after recovering from the induced chaos.
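
As with the busybox case, a minimal hypothetical configmap for the percona-mysql check (the name and namespace are illustrative; the keys come from the snippet above):

```yml
# Hypothetical configmap for the percona-mysql data consistency check.
apiVersion: v1
kind: ConfigMap
metadata:
  name: litmus-mysql-check   # illustrative name
  namespace: litmus          # illustrative namespace
data:
  parameters.yml: |
    dbuser: root
    dbpassword: k8sDemo
    dbname: tbd
```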
+<tr>
+<td> OpenEBS Target Container Failure </td>
+<td> Kills the cstor target/Jiva controller container and checks if it gets created again. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the OpenEBS data plane controller. </td>
+<td> Here </td>
+</tr>
+</table>
diff --git a/experiments/openebs/openebs-target-network-delay/README.md b/experiments/openebs/openebs-target-network-delay/README.md
index b359ad3de42..0ec35286f8a 100644
--- a/experiments/openebs/openebs-target-network-delay/README.md
+++ b/experiments/openebs/openebs-target-network-delay/README.md
@@ -2,127 +2,14 @@
-<table>
-<tr>
-<th> Type </th>
-<th> Description </th>
-<th> Storage </th>
-<th> K8s Platform </th>
-</tr>
-<tr>
-<td> Chaos </td>
-<td> Inject delay in storage target and verify the application availability </td>
-<td> OPENEBS </td>
-<td> Any </td>
-</tr>
-</table>
-
-## Entry-Criteria
-
-- Application services are accessible & pods are healthy
-- Application writes are successful
-
-## Exit-Criteria
-
-- Application services are accessible & pods are healthy
-- Data written prior to chaos is successfully retrieved/read
-- Database consistency is maintained as per db integrity check utils
-- Storage target pods are healthy
-
-## Notes
-
-- Typically used as a disruptive test, to cause loss of access to storage target by injecting network delay using pumba.
-- The application pod should be healthy once it gets recovered.
-
-## Associated Utils
-
-- [cstor_target_network_delay.yaml](/experiments/openebs/openebs-target-network-delay/cstor_target_network_delay.yaml)
-- [jiva_controller_network_delay.yaml](/experiments/openebs/openebs-target-network-delay/jiva_controller_network_delay.yaml)
-- [fetch_sc_and_provisioner.yml](/utils/apps/openebs/fetch_sc_and_provisioner.yml)
-
-## Litmusbook Environment Variables
-
-### Application
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> APP_NAMESPACE </td>
-<td> Namespace in which application pods are deployed </td>
-</tr>
-<tr>
-<td> APP_LABEL </td>
-<td> Unique Labels in `key=value` format of application deployment </td>
-</tr>
-<tr>
-<td> APP_PVC </td>
-<td> Name of persistent volume claim used for app's volume mounts </td>
-</tr>
-</table>
-
-### Chaos
-
 <table>
 <tr>
-<th> Parameter </th>
+<th> Name </th>
 <th> Description </th>
+<th> Documentation Link </th>
 </tr>
 <tr>
-<td> NETWORK_DELAY </td>
-<td> Egress delay (in milliseconds) injected on the target pod </td>
-</tr>
-<tr>
-<td> CHAOS_DURATION </td>
-<td> Period (in seconds) for which the induced delay is maintained </td>
-</tr>
-</table>
-
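
Purely as an illustration, these tunables would be set in the experiment's env section (placeholder values, not defaults):

```yml
# Placeholder values for the chaos tunables; adjust per scenario.
env:
  - name: NETWORK_DELAY
    value: '60000'   # egress delay in milliseconds
  - name: CHAOS_DURATION
    value: '60'      # seconds for which the induced delay is maintained
```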
-### Health Checks
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> LIVENESS_APP_NAMESPACE </td>
-<td> Namespace in which external liveness pods are deployed, if any </td>
-</tr>
-<tr>
-<td> LIVENESS_APP_LABEL </td>
-<td> Unique Labels in `key=value` format for external liveness pod, if any </td>
-</tr>
-<tr>
-<td> DATA_PERSISTENCY </td>
-<td> Data accessibility & integrity verification post recovery (enabled, disabled) </td>
-</tr>
-</table>
-
-### Procedure
-
-This scenario validates the behaviour of the application and OpenEBS persistent volumes amid chaos induced on OpenEBS data plane and control plane components.
-
-After injecting chaos into the component specified via the environment variable, the litmus experiment observes the behaviour of the corresponding OpenEBS PV and of the application which consumes the volume.
-
-Based on the value of the env DATA_PERSISTENCE, the corresponding data consistency util will be executed. At present, only busybox and percona-mysql are supported. Along with specifying the env in the litmus experiment, the user needs to pass a name for the configmap and supply the data-consistency-specific parameters via that configmap, in the following format:
-
-```yml
-  parameters.yml: |
-    blocksize: 4k
-    blockcount: 1024
-    testfile: difiletest
-```
-
-It is recommended to pass the test name for the configmap and to mount the corresponding configmap as a volume in the litmus pod. The above snippet holds the parameters required for validating data consistency in the busybox application.
-
-For percona-mysql, the following parameters are to be injected into the configmap.
-
-```yml
-  parameters.yml: |
-    dbuser: root
-    dbpassword: k8sDem0
-    dbname: tdb
-```
-
-The configmap data will be utilised by the litmus experiment as variables while executing the scenario. Based on the data provided, litmus checks if the data is consistent after recovering from the induced chaos.
\ No newline at end of file
+<tr>
+<td> OpenEBS Target Network Delay </td>
+<td> Injects network delay in the storage target and verifies the application availability. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the OpenEBS data plane controller. </td>
+<td> Here </td>
+</tr>
+</table>
diff --git a/experiments/openebs/openebs-target-network-loss/README.md b/experiments/openebs/openebs-target-network-loss/README.md
index bccde806998..e91416ed639 100644
--- a/experiments/openebs/openebs-target-network-loss/README.md
+++ b/experiments/openebs/openebs-target-network-loss/README.md
@@ -2,127 +2,14 @@
-<table>
-<tr>
-<th> Type </th>
-<th> Description </th>
-<th> Storage </th>
-<th> Application </th>
-<th> K8s Platform </th>
-</tr>
-<tr>
-<td> Chaos </td>
-<td> Inject n/w delay on storage target/controller </td>
-<td> OPENEBS </td>
-<td> Percona MySQL </td>
-<td> Any </td>
-</tr>
-</table>
-
-## Entry-Criteria
-
-- Application services are accessible & pods are healthy
-- Application writes are successful
-
-## Exit-Criteria
-
-- Application services are accessible & pods should not be in running state
-- Storage target pods are healthy
-
-## Notes
-
-- Typically used as a disruptive test, to cause loss of access to storage by injecting prolonged network delay
-- Tests Recovery workflows for the PV & data integrity post recovery
-
-## Associated Utils
-
-- [cstor_target_network_delay.yaml](/experiments/openebs/openebs-target-network-delay/cstor_target_network_delay.yaml)
-- [jiva_controller_network_delay.yaml](/experiments/openebs/openebs-target-network-delay/jiva_controller_network_delay.yaml)
-- [fetch_sc_and_provisioner.yml](/utils/apps/openebs/fetch_sc_and_provisioner.yml)
-
-## Litmus experiment Environment Variables
-
-### Application
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> APP_NAMESPACE </td>
-<td> Namespace in which application pods are deployed </td>
-</tr>
-<tr>
-<td> APP_LABEL </td>
-<td> Unique Labels in `key=value` format of application deployment </td>
-</tr>
-<tr>
-<td> APP_PVC </td>
-<td> Name of persistent volume claim used for app's volume mounts </td>
-</tr>
-</table>
-
-### Chaos
-
 <table>
 <tr>
-<th> Parameter </th>
+<th> Name </th>
 <th> Description </th>
+<th> Documentation Link </th>
 </tr>
 <tr>
-<td> NETWORK_DELAY </td>
-<td> Egress delay (in msec) on the target pod </td>
-</tr>
-<tr>
-<td> CHAOS_DURATION </td>
-<td> Period (in sec) for which the induced delay is maintained </td>
-</tr>
-</table>
-
-### Health Checks
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> LIVENESS_APP_NAMESPACE </td>
-<td> Namespace in which external liveness pods are deployed, if any </td>
-</tr>
-<tr>
-<td> LIVENESS_APP_LABEL </td>
-<td> Unique Labels in `key=value` format for external liveness pod, if any </td>
-</tr>
-<tr>
-<td> DATA_PERSISTENCE </td>
-<td> Data accessibility & integrity verification post recovery (enabled, disabled) </td>
-</tr>
-</table>
-
-### Procedure
-
-This scenario validates the behaviour of the application and OpenEBS persistent volumes amid chaos induced on OpenEBS data plane and control plane components.
-
-After injecting chaos into the component specified via the environment variable, the litmus experiment observes the behaviour of the corresponding OpenEBS PV and of the application which consumes the volume.
-
-Based on the value of the env DATA_PERSISTENCE, the corresponding data consistency util will be executed. At present, only busybox and percona-mysql are supported. Along with specifying the env in the litmus experiment, the user needs to pass a name for the configmap and supply the data-consistency-specific parameters via that configmap, in the following format:
-
-```yml
-  parameters.yml: |
-    blocksize: 4k
-    blockcount: 1024
-    testfile: difiletest
-```
-
-It is recommended to pass the test name for the configmap and to mount the corresponding configmap as a volume in the litmus pod. The above snippet holds the parameters required for validating data consistency in the busybox application.
-
-For percona-mysql, the following parameters are to be injected into the configmap.
-
-```yml
-  parameters.yml: |
-    dbuser: root
-    dbpassword: k8sDem0
-    dbname: tdb
-```
-
-The configmap data will be utilised by the litmus experiment as variables while executing the scenario. Based on the data provided, litmus checks if the data is consistent after recovering from the induced chaos.
+<tr>
+<td> OpenEBS Target Network Loss </td>
+<td> Injects network delay on the storage target/controller. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the OpenEBS data plane controller. </td>
+<td> Here </td>
+</tr>
+</table>
diff --git a/experiments/openebs/openebs-target-pod-failure/README.md b/experiments/openebs/openebs-target-pod-failure/README.md
index 318116c8113..6d1877677be 100644
--- a/experiments/openebs/openebs-target-pod-failure/README.md
+++ b/experiments/openebs/openebs-target-pod-failure/README.md
@@ -2,97 +2,14 @@
 <table>
 <tr>
-<th> Type </th>
-<th> Description </th>
-<th> Storage </th>
-<th> K8s Platform </th>
+<th> Name </th>
+<th> Description </th>
+<th> Documentation Link </th>
 </tr>
 <tr>
-<td> Chaos </td>
-<td> Kill the cstor/jiva target/controller pod and check if it gets created again </td>
-<td> OPENEBS </td>
-<td> Any </td>
-</tr>
-</table>
-
-## Entry-Criteria
-
-- Application services are accessible & pods are healthy
-- Application writes are successful
-
-## Exit-Criteria
-
-- Application services are accessible & pods are healthy
-- Data written prior to chaos is successfully retrieved/read
-- Database consistency is maintained as per db integrity check utils
-- Storage target pods are healthy
-
-### Notes
-
-- Typically used as a disruptive test, to cause loss of access to storage target by killing the containers.
-- The container should be created again and it should be healthy.
-
-## Associated Utils
-- [cstor_target_failure.yaml](/experiments/openebs/openebs-target-pod-failure/cstor_target_failure.yaml)
-- [jiva_controller_pod_failure.yaml](/experiments/openebs/openebs-target-pod-failure/jiva_controller_pod_failure.yaml)
-- [fetch_cstor_target_pod.yml](/utils/apps/openebs/fetch_cstor_target_pod.yml)
-- [fetch_jiva_controller_pod.yml](/utils/apps/openebs/fetch_jiva_controller_pod.yml)
-- [fetch_sc_and_provisioner.yml](/utils/apps/openebs/fetch_sc_and_provisioner.yml)
-- [target_affinity_check.yml](/utils/apps/openebs/target_affinity_check.yml)
-
-## Litmus experiment Environment Variables
-
-### Application
-
-<table>
-<tr>
-<th> Parameter </th>
-<th> Description </th>
-</tr>
-<tr>
-<td> APP_NAMESPACE </td>
-<td> Namespace in which application pods are deployed </td>
-</tr>
-<tr>
-<td> APP_LABEL </td>
-<td> Unique Labels in `key=value` format of application deployment </td>
-</tr>
-<tr>
-<td> APP_PVC </td>
-<td> Name of persistent volume claim used for app's volume mounts </td>
-</tr>
-<tr>
-<td> DATA_PERSISTENCE </td>
-<td> Specify the application name against which data consistency has to be ensured. Example: busybox </td>
-</tr>
-</table>
-
-### Procedure
-
-This scenario validates the behaviour of the application and OpenEBS persistent volumes amid chaos induced on OpenEBS data plane and control plane components.
-
-After injecting chaos into the component specified via the environment variable, the litmus experiment observes the behaviour of the corresponding OpenEBS PV and of the application which consumes the volume.
-
-Based on the value of the env `DATA_PERSISTENCE`, the corresponding data consistency util will be executed. At present, only busybox and percona-mysql are supported. Along with specifying the env in the litmus experiment, the user needs to pass a name for the configmap and supply the data-consistency-specific parameters via that configmap, in the following format:
-
-```yml
-  parameters.yml: |
-    blocksize: 4k
-    blockcount: 1024
-    testfile: difiletest
-```
-
-It is recommended to pass the test name for the configmap and to mount the corresponding configmap as a volume in the litmus pod. The above snippet holds the parameters required for validating data consistency in the busybox application.
-
-For percona-mysql, the following parameters are to be injected into the configmap.
-
-```yml
-  parameters.yml: |
-    dbuser: root
-    dbpassword: k8sDemo
-    dbname: tbd
-```
-
-The configmap data will be utilised by the litmus experiment as variables while executing the scenario.
-
-Based on the data provided, litmus checks if the data is consistent after recovering from the induced chaos.
+<tr>
+<td> OpenEBS Target Pod Failure </td>
+<td> Kill the cstor/jiva target/controller pod and check if it gets created again. This scenario validates the behaviour of the application and OpenEBS persistent volumes when chaos is induced on the OpenEBS data plane controller. </td>
+<td> Here </td>
+</tr>
+</table>