Merge branch 'develop' into update-version
avishnu authored Sep 6, 2023
2 parents 2138160 + 2944542 commit 81e046e
Showing 8 changed files with 218 additions and 52 deletions.
5 changes: 3 additions & 2 deletions SUMMARY.md
@@ -23,6 +23,7 @@
* [Node Cordon](reference/node-cordon.md)
* [Node Drain](reference/node-drain.md)
* [Volume Snapshots](reference/snapshot.md)
* [Volume Restore from Snapshot](reference/snapshot-restore.md)

## Additional Information

@@ -38,12 +39,12 @@
* [Scale etcd](quickstart/scale-etcd.md)

## Platform Support
* [Mayastor Installation on MicroK8s](platform-support/microk8s-installation.md)

* [Mayastor Installation on MicroK8s](platform-support/microk8s-installation.md)

## Troubleshooting

* [Basic Troubleshooting](quickstart/troubleshooting.md)
* [Known Limitations](quickstart/known-limitations.md)
* [Known Issues](quickstart/known-issues.md)
* [FAQs](quickstart/faqs.md)
* [FAQs](quickstart/faqs.md)
31 changes: 15 additions & 16 deletions quickstart/configure-mayastor.md
Expand Up @@ -47,7 +47,7 @@ Using one or more the following examples as templates, create the required type
{% tab title="Example DiskPool definition" %}
```text
cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1alpha1"
apiVersion: "openebs.io/v1beta1"
kind: DiskPool
metadata:
  name: pool-on-node-1
@@ -61,7 +61,7 @@ EOF

{% tab title="YAML" %}
```text
apiVersion: "openebs.io/v1alpha1"
apiVersion: "openebs.io/v1beta1"
kind: DiskPool
metadata:
  name: INSERT_POOL_NAME_HERE
@@ -77,6 +77,12 @@ spec:
When using the examples given as guides to creating your own pools, remember to replace the values for the fields "metadata.name", "spec.node" and "spec.disks" as appropriate to your cluster's intended configuration. Note that whilst the "disks" parameter accepts an array of values, the current version of Mayastor supports only one disk device per pool.
{% endhint %}
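The one-disk-per-pool constraint above can be checked before a manifest is submitted. The sketch below is illustrative only: the `build_diskpool` helper and its validation rule are our own, not part of any Mayastor tooling.

```python
def build_diskpool(name: str, node: str, disks: list) -> dict:
    """Build a DiskPool manifest dict, enforcing the one-disk-per-pool rule."""
    if len(disks) != 1:
        # Current Mayastor versions support only one disk device per pool.
        raise ValueError("exactly one disk device per pool is supported")
    return {
        "apiVersion": "openebs.io/v1beta1",
        "kind": "DiskPool",
        "metadata": {"name": name},
        "spec": {"node": node, "disks": disks},
    }
```

The resulting dict can be serialized to YAML and piped to `kubectl create -f -` exactly as in the examples above.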

{% hint style="note" %}

Existing Custom Resource (CR) schemas from older versions will be updated from v1alpha1 to v1beta1 after upgrading to Mayastor 2.4 or later. To resolve errors encountered during the upgrade, refer to the [troubleshooting guide](quickstart/troubleshooting.md).

{% endhint %}

### Verify Pool Creation and Status

The status of DiskPools may be determined by reference to their cluster CRs. Available, healthy pools should report their State as `online`. Verify that the expected number of pools have been created and that they are online.
@@ -91,24 +97,17 @@ kubectl get dsp -n mayastor
{% tab title="Example Output" %}
```text
NAME NODE STATE POOL_STATUS CAPACITY USED AVAILABLE
pool-on-node-1 node-1-14944 Online Online 10724835328 0 10724835328
pool-on-node-2 node-2-14944 Online Online 10724835328 0 10724835328
pool-on-node-3 node-3-14944 Online Online 10724835328 0 10724835328
pool-on-node-1 node-1-14944 Created Online 10724835328 0 10724835328
pool-on-node-2 node-2-14944 Created Online 10724835328 0 10724835328
pool-on-node-3 node-3-14944 Created Online 10724835328 0 10724835328
```
{% endtab %}
{% endtabs %}

{% hint style="info" %}

Mayastor 2.0.1 adds two new fields to the DiskPool operator YAML:
1. **status.cr_state**: The `cr_state`, which can be _creating_, _created_, or _terminating_, is used by the operator to reconcile with the CR. The `cr_state` is set to `Terminating` when a CR delete event is received.
2. **status.pool_status**: The `pool_status` represents the status of the respective control plane pool resource.
{% endhint %}

Pool configuration and state information can also be obtained by using the [Mayastor kubectl plugin](https://mayastor.gitbook.io/introduction/reference/kubectl-plugin).



----------


## Create Mayastor StorageClass\(s\)

Mayastor dynamically provisions PersistentVolumes \(PVs\) based on StorageClass definitions created by the user. Parameters of the definition are used to set the characteristics and behaviour of its associated PVs. For a detailed description of these parameters see [storage class parameter description](https://mayastor.gitbook.io/introduction/reference/storage-class-parameters). Most importantly StorageClass definition is used to control the level of data protection afforded to it \(that is, the number of synchronous data replicas which are maintained, for purposes of redundancy\). It is possible to create any number of StorageClass definitions, spanning all permitted parameter permutations.
23 changes: 11 additions & 12 deletions quickstart/performance-tips.md
@@ -93,17 +93,16 @@ cat /sys/devices/system/cpu/isolated
{% endtab %}
{% endtabs %}

### Deploy Mayastor daemonset

Edit the `mayastor-daemonset.yaml` file and set the `-l` parameter of mayastor to specify CPU cores that Mayastor reactors should run on. In the following example we run mayastor on the third and fourth CPU core:

```yaml
...
containers:
- name: mayastor
...
args:
...
- "-l3,4"
### Update the Mayastor Helm chart to specify CPU cores

To allot specific CPU cores for Mayastor's reactors, follow these steps:

1. Ensure that you have the Mayastor kubectl plugin installed, matching the version of your Mayastor Helm chart deployment ([releases](https://github.com/openebs/mayastor/releases)). You can find installation instructions in the [Mayastor kubectl plugin documentation](https://mayastor.gitbook.io/introduction/advanced-operations/kubectl-plugin).

2. Execute the following command to update Mayastor's configuration. Replace `<namespace>` with the appropriate Kubernetes namespace where Mayastor is deployed.

```
kubectl mayastor upgrade -n <namespace> --set-args 'io_engine.coreList={3,4}'
```

In the above command, `io_engine.coreList={3,4}` specifies that Mayastor's reactors should operate on the third and fourth CPU cores.
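Before applying the core list, it is worth checking that the requested cores are actually in the kernel's isolated set (as read from `/sys/devices/system/cpu/isolated`, which uses range notation such as `3-4`). The helpers below are a hypothetical pre-flight sketch, not part of any Mayastor tooling:

```python
def parse_cpu_list(spec: str) -> set:
    """Expand a kernel CPU list string like '1,3-4' into a set of core IDs."""
    cores = set()
    for part in spec.split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cores.update(range(int(lo), int(hi) + 1))
        else:
            cores.add(int(part))
    return cores

def cores_are_isolated(requested: set, isolated_spec: str) -> bool:
    """Check that every core requested for the io-engine is isolated."""
    return requested <= parse_cpu_list(isolated_spec)
```

For the example above, `cores_are_isolated({3, 4}, "3-4")` confirms that cores 3 and 4 are safe to hand to `io_engine.coreList`.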
17 changes: 17 additions & 0 deletions quickstart/troubleshooting.md
@@ -295,3 +295,20 @@ Thread 1 (Thread 0x7f782559f040 (LWP 56)):
{% endtab %}
{% endtabs %}

-------------

## DiskPool behaviour

The following behaviour may be encountered while upgrading from older releases to Mayastor 2.4 and above.

### Get DSP

Running `kubectl get dsp -n mayastor` may result in an error due to the stale `v1alpha1` schema in the kubectl discovery cache. To resolve this, run `kubectl get diskpools.openebs.io -n mayastor`. After this, the kubectl discovery cache will be updated with the `v1beta1` DiskPool object.

### Create API

When creating a DiskPool with `kubectl create -f dsp.yaml`, you might encounter an error related to `v1alpha1` CR definitions. To resolve this, ensure that the CR definition in your YAML file is updated to `v1beta1` (for example, `apiVersion: openebs.io/v1beta1`).

{% hint style="note" %}
You can validate the schema changes by executing `kubectl get crd diskpools.openebs.io`.
{% endhint %}
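The apiVersion bump can also be applied to existing manifest files with a simple substitution. The snippet below is a convenience sketch of our own, not an official migration tool, and assumes the manifest contains no other uses of the old group/version string:

```python
def bump_diskpool_api_version(manifest: str) -> str:
    """Rewrite a DiskPool manifest's apiVersion from v1alpha1 to v1beta1."""
    return manifest.replace("openebs.io/v1alpha1", "openebs.io/v1beta1")
```

Run the result through `kubectl create -f -` (or `kubectl apply`) as usual once rewritten.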
50 changes: 31 additions & 19 deletions reference/call-home.md
@@ -1,15 +1,14 @@
# Call-home metrics

By default, Mayastor collects some basic information related to the number and scale of user-deployed instances. The collected data is anonymous and is encrypted at rest. This data is used to understand storage usage trends, which in turn helps maintainers prioritize their contributions to maximize the benefit to the community as a whole.
## Mayastor default information collection

{% hint style="info" %}
By default, Mayastor collects basic information related to the number and scale of user-deployed instances. The collected data is anonymous and is encrypted at rest. This data is used to understand storage usage trends, which in turn helps maintainers prioritize their contributions to maximize the benefit to the community as a whole.

{% hint style="info" %}
No user-identifiable information, hostnames, passwords, or volume data are collected. **ONLY** the below-mentioned information is collected from the cluster.

{% endhint %}


A summary of the information collected is given below.
A summary of the information collected is given below:

| **Cluster information** |
| :--- |
@@ -20,41 +19,54 @@ A summary of the information collected is given below.
|**Deploy namespace**: This is a SHA-256 hashed value of the name of the Kubernetes namespace where Mayastor Helm chart is deployed.|
|**Storage node count**: This is the number of nodes on which the Mayastor I/O engine is scheduled.|




|**Pool information**|
| :--- |
|**Pool count**: This is the number of Mayastor DiskPools in your cluster.|
|**Pool maximum size**: This is the capacity of the Mayastor DiskPool with the highest capacity.|
|**Pool minimum size**: This is the capacity of the Mayastor DiskPool with the lowest capacity.|
|**Pool mean size**: This is the average capacity of the Mayastor DiskPools in your cluster.|
|**Pool capacity percentiles**: This calculates and returns the capacity distribution of Mayastor DiskPools for the 50th, 75th and the 90th percentiles.|




| **Pools created**: This is the number of successful pool creation attempts.|
| **Pools deleted**: This is the number of successful pool deletion attempts.|
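The pool size statistics above (minimum, maximum, mean, and the 50th/75th/90th capacity percentiles) can be reproduced from a list of raw pool capacities. The sketch below uses a nearest-rank percentile, which may differ slightly from the method call-home actually uses:

```python
import statistics

def percentile(values, pct):
    """Nearest-rank percentile of a list of capacities (bytes)."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def pool_summary(capacities):
    """Summarize pool capacities the way the call-home report describes."""
    return {
        "max": max(capacities),
        "min": min(capacities),
        "mean": statistics.mean(capacities),
        "p50": percentile(capacities, 50),
        "p75": percentile(capacities, 75),
        "p90": percentile(capacities, 90),
    }
```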

|**Volume information**|
| :--- |
|**Volume count**: This is the number of Mayastor Volumes in your cluster.|
|**Volume minimum size**: This is the capacity of the Mayastor Volume with the lowest capacity.|
|**Volume mean size**: This is the average capacity of the Mayastor Volumes in your cluster.|
|**Volume capacity percentiles**: This calculates and returns the capacity distribution of Mayastor Volumes for the 50th, 75th and the 90th percentiles.|



| **Volumes created**: This is the number of successful volume creation attempts.|
| **Volumes deleted**: This is the number of successful volume deletion attempts. |

|**Replica Information**|
| :--- |
|**Replica count**: This is the number of Mayastor Volume replicas in your cluster.|
|**Average replica count per volume**: This is the average number of replicas each Mayastor Volume has in your cluster.|


### How to disable the collection of usage data
### Storage location of collected data

The collected information is stored on behalf of the OpenEBS project by DataCore Software Inc. in data centers located in Texas, USA.

----

## Disable specific data collection

To disable the collection of **usage data** or the generation of **events**, the following Helm flags can either be set during installation or re-applied post-installation.

### Disable collection of usage data

To disable the collection of data metrics from the cluster, add the following flag to the Helm install command.

```
--set obs.callhome.enabled=false
```

### Disable generation of events data

To disable the collection of data metrics from the cluster, add `--set obs.callhome.enabled=false` flag to the Helm install command. The Helm command, along with the flag, can either be executed during installation or can be re-executed post-installation.
When eventing is enabled, NATS pods are created to gather various events from the cluster, including statistical metrics such as *pools created*. To deactivate eventing within the cluster, include the following flag in the Helm installation command.

### Where is the collected data stored?
```
--set eventing.enabled=false
```

The collected information is stored on behalf of the OpenEBS project by DataCore Software Inc. in data centers located in Texas, USA.
25 changes: 23 additions & 2 deletions reference/monitoring.md
@@ -4,7 +4,7 @@

The Mayastor pool metrics exporter runs as a sidecar container within every io-engine pod and exposes pool usage metrics in Prometheus format. These metrics are exposed on port 9502 via the HTTP endpoint `/metrics` and are refreshed every five minutes.

### Supported metrics
### Supported pool metrics

| Name | Type | Unit | Description |
| :--- | :--- | :--- | :--- |
@@ -31,7 +31,27 @@ disk_pool_committed_size_bytes{node="worker-0", name="mayastor-disk-pool"} 96636
{% endtab %}
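A scraped line in the exposition format shown above can be parsed with a small helper. This regex-based sketch handles only simple gauge lines like the example (it is not a full Prometheus exposition-format parser):

```python
import re

# Matches: metric_name{label="value", ...} <number>
METRIC_RE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')

def parse_metric(line):
    """Parse a simple Prometheus gauge line into (name, labels, value)."""
    m = METRIC_RE.match(line.strip())
    if not m:
        raise ValueError("unrecognised metric line: %r" % line)
    name, label_blob, raw_value = m.groups()
    labels = {}
    for pair in label_blob.split(","):
        if pair.strip():
            key, val = pair.strip().split("=", 1)
            labels[key] = val.strip('"')
    return name, labels, float(raw_value)
```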


### Integrating exporter with Prometheus monitoring stack

--------

## Stats exporter metrics

When [eventing](reference/call-home.md) is activated, the stats exporter operates within the **obs-callhome-stats** container, located in the **callhome** pod. The statistics are made accessible through an HTTP endpoint at port `9090`, specifically using the `/stats` route.


### Supported stats metrics

| Name | Type | Unit | Description |
| :--- | :--- | :--- | :--- |
| pools_created | Gauge | Integer | Total successful pool creation attempts |
| pools_deleted | Gauge | Integer | Total successful pool deletion attempts |
| volumes_created | Gauge | Integer | Total successful volume creation attempts |
| volumes_deleted | Gauge | Integer | Total successful volume deletion attempts |


----

## Integrating exporter with Prometheus monitoring stack

1. To install, add the Prometheus-stack helm chart and update the repo.

@@ -73,6 +93,7 @@ spec:
Upon successful integration of the exporter with the Prometheus stack, the metrics will be available on the port 9090 and HTTP endpoint /metrics.
{% endhint %}

---

## CSI metrics exporter

110 changes: 110 additions & 0 deletions reference/snapshot-restore.md
@@ -0,0 +1,110 @@
---
title: Volume Restore from a Snapshot
---

Restoring a volume from an existing snapshot creates an exact replica of a storage volume as captured at a specific point in time. Snapshots serve as an essential tool for data protection, recovery, and efficient management in Kubernetes environments. This article provides a step-by-step guide on how to restore a volume from a snapshot.

## Prerequisites

### Step 1: Create a storage class

To begin, you'll need to create a StorageClass that defines the properties of the volume to be restored. Refer to [Storage Class Parameters](reference/storage-class-parameters.md) for more details. Use the following command to create the StorageClass:

{% hint style="info" %}
`thin: "true"` and `repl: "1"` is the only supported combination.
{% endhint %}

{% tabs %}
{% tab title="Command" %}
```text
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-1-restore
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "1"
  thin: "true"
provisioner: io.openebs.csi-mayastor
EOF
```
{% endtab %}
{% tab title="YAML" %}
```text
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-1-restore
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "1"
  thin: "true"
provisioner: io.openebs.csi-mayastor
```
{% endtab %}
{% endtabs %}

> Note the name of the StorageClass, which, in this example, is **mayastor-1-restore**.
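Because `thin: "true"` with `repl: "1"` is the only supported combination for restore, a pre-flight check can catch a misconfigured StorageClass before it is applied. The validator below is our own sketch, not part of Mayastor:

```python
def validate_restore_storageclass(params):
    """Reject restore StorageClass parameters other than thin=true, repl=1."""
    if params.get("thin") != "true" or params.get("repl") != "1":
        raise ValueError('restore requires thin: "true" and repl: "1"')
```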

### Step 2: Create a snapshot

You need to create a volume snapshot before proceeding with the restore. Follow the steps outlined in [this guide](quickstart/snapshot.md) to create a volume snapshot.

> Note the snapshot's name, for example, **pvc-snap-1**.
-------------------

## Create a volume restore of the existing snapshot

After creating a snapshot, you can create a PersistentVolumeClaim (PVC) from it to generate the volume restore. Use the following command:

{% tabs %}
{% tab title="Command" %}
```text
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc # add a name for your new volume
spec:
  storageClassName: mayastor-1-restore # add your StorageClass name
  dataSource:
    name: pvc-snap-1 # add your VolumeSnapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
```
{% endtab %}
{% tab title="YAML" %}
```text
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc # add a name for your new volume
spec:
  storageClassName: mayastor-1-restore # add your StorageClass name
  dataSource:
    name: pvc-snap-1 # add your VolumeSnapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
{% endtab %}
{% endtabs %}


By running this command, you create a new PVC named `restore-pvc` based on the specified snapshot. The restored volume will have the same data and configuration as the original volume had at the time of the snapshot.
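When restore PVCs are generated programmatically (for example, from a CI pipeline), the manifest above can be built from a few inputs. The helper below is hypothetical and populates only the fields shown in the example:

```python
def build_restore_pvc(name, storage_class, snapshot, size):
    """Build a PVC manifest that restores a volume from a VolumeSnapshot."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "storageClassName": storage_class,
            # dataSource points the provisioner at the snapshot to restore.
            "dataSource": {
                "name": snapshot,
                "kind": "VolumeSnapshot",
                "apiGroup": "snapshot.storage.k8s.io",
            },
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": size}},
        },
    }
```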
