en: support PD micro service (#2517)
qiancai authored Mar 25, 2024
1 parent 154c622 commit 45a17bb
Showing 8 changed files with 238 additions and 10 deletions.
83 changes: 83 additions & 0 deletions en/configure-a-tidb-cluster.md
@@ -215,6 +215,29 @@ To mount multiple PVs for TiCDC:
</div>
<div label="PD microservices">
To mount multiple PVs for PD microservices (taking the `tso` microservice as an example):

> **Note:**
>
> Starting from v8.0.0, PD supports the [microservice mode](https://docs.pingcap.com/tidb/dev/pd-microservices) (experimental).

```yaml
pd:
mode: "ms"
pdms:
- name: "tso"
config: |
[log.file]
filename = "/pdms/log/tso.log"
storageVolumes:
- name: log
storageSize: "10Gi"
mountPath: "/pdms/log"
```

</div>
</SimpleTab>

> **Note:**
@@ -255,6 +278,30 @@ The deployed cluster topology by default has three PD Pods, three TiKV Pods, and
>
> If the Kubernetes cluster has fewer than three nodes, one PD Pod stays in the Pending state, and neither TiKV Pods nor TiDB Pods are created. In this case, to start the TiDB cluster, you can reduce the number of PD Pods in the default deployment to `1`.
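>
> For example, a minimal sketch of this change in the `TidbCluster` definition (only the relevant field is shown):
>
> ```yaml
> spec:
>   pd:
>     replicas: 1
> ```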

#### Enable PD microservices

> **Note:**
>
> Starting from v8.0.0, PD supports the [microservice mode](https://docs.pingcap.com/tidb/dev/pd-microservices) (experimental).

To enable PD microservices in your cluster, configure `spec.pd.mode` and `spec.pdms` in the `${cluster_name}/tidb-cluster.yaml` file:

```yaml
spec:
pd:
mode: "ms"
pdms:
- name: "tso"
baseImage: pingcap/pd
replicas: 2
- name: "scheduling"
baseImage: pingcap/pd
replicas: 1
```

- `spec.pd.mode` is used to enable or disable PD microservices. Setting it to `"ms"` enables PD microservices, while setting it to `""` or removing this field disables PD microservices.
- `spec.pdms.config` is used to configure PD microservices, and the specific configuration parameters are the same as `spec.pd.config`. To get all the parameters that can be configured for PD microservices, see the [PD configuration file](https://docs.pingcap.com/tidb/stable/pd-configuration-file).

#### Enable TiProxy

The deployment method is the same as that of PD. In addition, you need to modify `spec.tiproxy` to manually specify the number of TiProxy components.
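
A minimal sketch of this setting is as follows; the `pingcap/tiproxy` image and the replica count are illustrative:

```yaml
spec:
  tiproxy:
    baseImage: pingcap/tiproxy
    replicas: 3
```
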
@@ -390,6 +437,42 @@ For all the configurable parameters of PD, refer to [PD Configuration File](http
> - If you deploy your TiDB cluster using CR, make sure that `Config: {}` is set, regardless of whether you want to modify `config`. Otherwise, PD components might fail to start. This step is meant to be compatible with `Helm` deployment.
> - After the cluster is started for the first time, some PD configuration items are persisted in etcd. The persisted configuration in etcd takes precedence over that in PD. Therefore, after the first start, you cannot modify some PD configuration using parameters. You need to dynamically modify the configuration using SQL statements, pd-ctl, or PD server API. Currently, among all the configuration items listed in [Modify PD configuration online](https://docs.pingcap.com/tidb/stable/dynamic-config#modify-pd-configuration-online), except `log.level`, all the other configuration items cannot be modified using parameters after the first start.

##### Configure PD microservices

> **Note:**
>
> Starting from v8.0.0, PD supports the [microservice mode](https://docs.pingcap.com/tidb/dev/pd-microservices) (experimental).

You can configure PD microservices using the `spec.pd.mode` and `spec.pdms` parameters of the TidbCluster CR. Currently, PD supports two microservices: the `tso` microservice and the `scheduling` microservice. The following is a configuration example:

```yaml
spec:
pd:
mode: "ms"
pdms:
- name: "tso"
baseImage: pingcap/pd
replicas: 2
config: |
[log.file]
filename = "/pdms/log/tso.log"
- name: "scheduling"
baseImage: pingcap/pd
replicas: 1
config: |
[log.file]
filename = "/pdms/log/scheduling.log"
```

In the preceding configuration, `spec.pdms` is used to configure PD microservices, and the specific configuration parameters are the same as `spec.pd.config`. To get all the parameters that can be configured for PD microservices, see the [PD configuration file](https://docs.pingcap.com/tidb/stable/pd-configuration-file).

> **Note:**
>
> - If you deploy your TiDB cluster using CR, make sure that `config: {}` is set, regardless of whether you want to modify `config`. Otherwise, PD microservice components might fail to start. This step is meant to be compatible with `Helm` deployment.
> - If you enable the PD microservice mode when you deploy a TiDB cluster, some configuration items of PD microservices are persisted in etcd. The persisted configuration in etcd takes precedence over that in PD.
> - If you enable the PD microservice mode for an existing TiDB cluster, some configuration items of PD microservices adopt the same values in PD configuration and are persisted in etcd. The persisted configuration in etcd takes precedence over that in PD.
> - Hence, after the first startup of PD microservices, you cannot modify these configuration items using parameters. Instead, you can modify them dynamically using [SQL statements](https://docs.pingcap.com/tidb/stable/dynamic-config#modify-pd-configuration-dynamically), [pd-ctl](https://docs.pingcap.com/tidb/stable/pd-control#config-show--set-option-value--placement-rules), or PD server API. Currently, among all the configuration items listed in [Modify PD configuration dynamically](https://docs.pingcap.com/tidb/stable/dynamic-config#modify-pd-configuration-dynamically), except `log.level`, all the other configuration items cannot be modified using parameters after the first startup of PD microservices.

#### Configure TiProxy parameters

TiProxy parameters can be configured by `spec.tiproxy.config` in TidbCluster Custom Resource.
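
For example, a minimal sketch of such a configuration; the `[log]` section and its `level` value are assumptions for illustration only:

```yaml
spec:
  tiproxy:
    config: |
      [log]
      level = "info"
```
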
17 changes: 11 additions & 6 deletions en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -513,12 +513,13 @@ For a TiDB cluster deployed across Kubernetes clusters, to perform a rolling upg

2. Take step 1 as an example, perform the following upgrade operations in sequence:

1. If TiProxy is deployed in clusters, upgrade the TiProxy versions for all the Kubernetes clusters that have TiProxy deployed.
2. If TiFlash is deployed in clusters, upgrade the TiFlash versions for all the Kubernetes clusters that have TiFlash deployed.
3. Upgrade TiKV versions for all Kubernetes clusters.
4. If Pump is deployed in clusters, upgrade the Pump versions for all the Kubernetes clusters that have Pump deployed.
5. Upgrade TiDB versions for all Kubernetes clusters.
6. If TiCDC is deployed in clusters, upgrade the TiCDC versions for all the Kubernetes clusters that have TiCDC deployed.
1. If [PD microservices](https://docs.pingcap.com/tidb/dev/pd-microservices) (introduced in TiDB v8.0.0) are deployed in clusters, upgrade the version of PD microservices for all Kubernetes clusters that have PD microservices deployed.
2. If TiProxy is deployed in clusters, upgrade the TiProxy versions for all the Kubernetes clusters that have TiProxy deployed.
3. If TiFlash is deployed in clusters, upgrade the TiFlash versions for all the Kubernetes clusters that have TiFlash deployed.
4. Upgrade TiKV versions for all Kubernetes clusters.
5. If Pump is deployed in clusters, upgrade the Pump versions for all the Kubernetes clusters that have Pump deployed.
6. Upgrade TiDB versions for all Kubernetes clusters.
7. If TiCDC is deployed in clusters, upgrade the TiCDC versions for all the Kubernetes clusters that have TiCDC deployed.

## Exit and reclaim a TidbCluster that has already joined a cross-Kubernetes cluster

@@ -528,6 +529,10 @@ When you need to make a cluster exit from the joined TiDB cluster deployed acros

Take the second TidbCluster created in [the last section](#step-2-deploy-the-new-tidbcluster-to-join-the-tidb-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you have enabled other components such as TiFlash, TiCDC, TiProxy, and Pump, set the number of these replicas to `0`:

> **Note:**
>
> Starting from v8.0.0, PD supports the microservice mode. If PD microservices are configured, you also need to set the `replicas` of the corresponding PD microservice component to `0` in the `pdms` configuration.
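>
> For example, a minimal sketch of the `pdms` part of this change (only the relevant fields are shown):
>
> ```yaml
> pdms:
>   - name: "tso"
>     replicas: 0
>   - name: "scheduling"
>     replicas: 0
> ```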

{{< copyable "shell-regular" >}}

```bash
58 changes: 56 additions & 2 deletions en/enable-tls-between-components.md
@@ -162,6 +162,33 @@ This section describes how to issue certificates using two methods: `cfssl` and
...
```
> **Note:**
>
> Starting from v8.0.0, PD supports the [microservice mode](https://docs.pingcap.com/tidb/dev/pd-microservices) (experimental). If you deploy PD microservices in your cluster, you do not need to generate a separate certificate for each PD microservice component. Instead, you only need to add the host configurations for the microservices to the `hosts` field of the `pd-server.json` file. Taking the `scheduling` microservice as an example, you need to configure the following items:
>
> ``` json
> ...
> "CN": "TiDB",
> "hosts": [
> "127.0.0.1",
> "::1",
> "${cluster_name}-pd",
> ...
> "*.${cluster_name}-pd-peer.${namespace}.svc",
> // The following are host configurations for the `scheduling` microservice
> "${cluster_name}-scheduling",
> "${cluster_name}-scheduling.${cluster_name}",
> "${cluster_name}-scheduling.${cluster_name}.svc",
> "${cluster_name}-scheduling-peer",
> "${cluster_name}-scheduling-peer.${cluster_name}",
> "${cluster_name}-scheduling-peer.${cluster_name}.svc",
> "*.${cluster_name}-scheduling-peer",
> "*.${cluster_name}-scheduling-peer.${cluster_name}",
> "*.${cluster_name}-scheduling-peer.${cluster_name}.svc",
> ],
> ...
> ```
>
> `${cluster_name}` is the name of the cluster. `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.
Finally, generate the PD server-side certificate:
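
A typical invocation for this step, assuming the standard `cfssl` workflow with the CA files (`ca.pem`, `ca-key.pem`, `ca-config.json`) and the `internal` profile prepared earlier, is sketched below:

```shell
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal pd-server.json | cfssljson -bare pd-server
```
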
@@ -1357,7 +1384,7 @@ In this step, you need to perform the following operations:
- Deploy a monitoring system
- Deploy the Pump component, and enable CN verification
1. Create a TiDB cluster:
1. Create a TiDB cluster with a monitoring system and the Pump component:
Create the `tidb-cluster.yaml` file:
@@ -1443,7 +1470,34 @@
Execute `kubectl apply -f tidb-cluster.yaml` to create a TiDB cluster.
This operation also includes deploying a monitoring system and the Pump component.
> **Note:**
>
> Starting from v8.0.0, PD supports the [microservice mode](https://docs.pingcap.com/tidb/dev/pd-microservices) (experimental). To deploy PD microservices, you need to configure `cert-allowed-cn` for each microservice. Taking the `scheduling` microservice as an example, you need to make the following configurations:
>
> - Update `pd.mode` to `ms`.
> - Configure the `security` field for the `scheduling` microservice.
>
> ```yaml
> pd:
> baseImage: pingcap/pd
> maxFailoverCount: 0
> replicas: 1
> requests:
> storage: "10Gi"
> config:
> security:
> cert-allowed-cn:
> - TiDB
> mode: "ms"
> pdms:
> - name: "scheduling"
> baseImage: pingcap/pd
> replicas: 1
> config:
> security:
> cert-allowed-cn:
> - TiDB
> ```
2. Create a Drainer component and enable TLS and CN verification:
26 changes: 26 additions & 0 deletions en/get-started.md
@@ -297,6 +297,32 @@ tidbcluster.pingcap.com/basic created

If you need to deploy a TiDB cluster on an ARM64 machine, refer to [Deploying a TiDB Cluster on ARM64 Machines](deploy-cluster-on-arm64.md).

> **Note:**
>
> Starting from v8.0.0, PD supports the [microservice mode](https://docs.pingcap.com/tidb/dev/pd-microservices) (experimental). To deploy PD microservices, use the following command:
>
> ``` shell
> kubectl create namespace tidb-cluster && \
> kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/pd-micro-service-cluster.yaml
> ```
>
> View the Pod status:
>
> ``` shell
> watch kubectl get po -n tidb-cluster
> ```
>
> ```
> NAME READY STATUS RESTARTS AGE
> basic-discovery-6bb656bfd-xl5pb 1/1 Running 0 9m
> basic-pd-0 1/1 Running 0 9m
> basic-scheduling-0 1/1 Running 0 9m
> basic-tidb-0 2/2 Running 0 7m
> basic-tikv-0 1/1 Running 0 8m
> basic-tso-0 1/1 Running 0 9m
> basic-tso-1 1/1 Running 0 9m
> ```
### Deploy TiDB Dashboard independently
12 changes: 12 additions & 0 deletions en/modify-tidb-configuration.md
@@ -43,6 +43,18 @@ Among all the PD configuration items listed in [Modify PD configuration online](

For TiDB clusters deployed on Kubernetes, if you need to modify the PD configuration, you can modify the configuration online using [SQL statements](https://docs.pingcap.com/tidb/stable/dynamic-config/#modify-pd-configuration-online), [pd-ctl](https://docs.pingcap.com/tidb/stable/pd-control#config-show--set-option-value--placement-rules), or PD server API.

### Modify PD microservice configuration

> **Note:**
>
> Starting from v8.0.0, PD supports the [microservice mode](https://docs.pingcap.com/tidb/dev/pd-microservices) (experimental).

After each component of the PD microservices is started for the first time, some PD configuration items are persisted in etcd. The persisted configuration in etcd takes precedence over the configuration file in PD. Therefore, after the first start of each PD microservice component, you cannot modify some PD configuration items by using the `TidbCluster` CR.

Among all the configuration items of PD microservices listed in [Modify PD configuration dynamically](https://docs.pingcap.com/tidb/stable/dynamic-config/#modify-pd-configuration-dynamically), after the first start of each PD microservice component, only `log.level` can be modified by using the `TidbCluster` CR. Other configuration items cannot be modified by using the CR.

For TiDB clusters deployed on Kubernetes, if you need to modify configuration items of PD microservices, you can modify them dynamically using [SQL statements](https://docs.pingcap.com/tidb/stable/dynamic-config/#modify-pd-configuration-dynamically), [pd-ctl](https://docs.pingcap.com/tidb/stable/pd-control#config-show--set-option-value--placement-rules), or PD server API.
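
For example, a sketch of the SQL approach, run from a client connected to TiDB; `log.level` is used only as an illustrative configuration item:

```sql
SET CONFIG pd `log.level` = 'info';
```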

## Modify TiProxy configuration

Modifying the configuration of the TiProxy component never restarts the Pod. If you want to restart the Pod, you need to manually kill the Pod or change the Pod image to manually trigger the restart.
44 changes: 42 additions & 2 deletions en/scale-a-tidb-cluster.md
@@ -161,13 +161,13 @@ When the number of Pods for all components reaches the preset value and all comp
## Vertical scaling
Vertically scaling TiDB means that you scale TiDB up or down by increasing or decreasing the limit of resources on the Pod. Vertically scaling is essentially the rolling update of the Pods.
Vertically scaling TiDB means that you scale TiDB up or down by increasing or decreasing the limit of resources on the Pod. Vertical scaling is essentially the rolling update of the Pods.
### Vertically scale components
This section describes how to vertically scale up or scale down components including PD, TiKV, TiDB, TiProxy, TiFlash, and TiCDC.
- To scale up or scale down PD, TiKV, TiDB, and TiProxy, use kubectl to modify `spec.pd.resources`, `spec.tikv.resources`, and `spec.tidb.resources` in the `TidbCluster` object that corresponds to the cluster to desired values.
- To scale up or scale down PD, TiKV, TiDB, and TiProxy, use kubectl to modify `spec.pd.resources`, `spec.tikv.resources`, `spec.tidb.resources`, and `spec.tiproxy.resources` in the `TidbCluster` object that corresponds to the cluster to the desired values.
- To scale up or scale down TiFlash, modify the value of `spec.tiflash.resources`.
@@ -190,6 +190,46 @@ When all Pods are rebuilt and in the `Running` state, the vertical scaling is co
> - If the resource's `requests` field is modified during the vertical scaling process, and if PD, TiKV, and TiFlash use `Local PV`, they will be scheduled back to the original node after the upgrade. At this time, if the original node does not have enough resources, the Pod ends up staying in the `Pending` status and thus impacts the service.
> - TiDB is a horizontally scalable database, so it is recommended to take advantage of it simply by adding more nodes rather than upgrading hardware resources like you do with a traditional database.

### Scale PD microservice components

> **Note:**
>
> Starting from v8.0.0, PD supports the [microservice mode](https://docs.pingcap.com/tidb/dev/pd-microservices) (experimental).

PD microservices are typically used to address performance bottlenecks in PD and improve the quality of PD services. To determine whether it is necessary to scale PD microservices, see [PD microservice FAQs](https://docs.pingcap.com/tidb/dev/pd-microservices#FAQ).

- Currently, the PD microservice mode splits the timestamp allocation and cluster scheduling functions of PD into two independently deployed components: the `tso` microservice and the `scheduling` microservice.
- The `tso` microservice implements a primary-secondary architecture. If the `tso` microservice becomes the bottleneck, it is recommended to scale it vertically.
- The `scheduling` microservice serves as a scheduling component. If the `scheduling` microservice becomes the bottleneck, it is recommended to scale it horizontally.

- To vertically scale each component of PD microservices, use the `kubectl` command to modify `spec.pdms.resources` of the `TidbCluster` object corresponding to the cluster to your desired value (see the sketch after this list).

- To horizontally scale each component of PD microservices, use the `kubectl` command to modify `spec.pdms.replicas` of the `TidbCluster` object corresponding to the cluster to your desired value.
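
The following is a minimal sketch of what vertical scaling of a PD microservice looks like in the `TidbCluster` definition. The resource values are illustrative, and the field layout is assumed to mirror that of `spec.pd`:

```yaml
spec:
  pdms:
    - name: "tso"
      baseImage: pingcap/pd
      replicas: 2
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        cpu: "4"
        memory: 8Gi
```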

Taking the `scheduling` microservice as an example, the steps for horizontal scaling are as follows:

1. Modify the `replicas` value of the corresponding `TidbCluster` object to your desired value. For example, run the following command to set the `replicas` value of `scheduling` to `3`:

```shell
kubectl patch -n ${namespace} tc ${cluster_name} --type merge --patch '{"spec":{"pdms":{"name":"scheduling", "replicas":3}}}'
```

2. Check whether the corresponding TiDB cluster configuration for the Kubernetes cluster is updated:

```shell
kubectl get tidbcluster ${cluster_name} -n ${namespace} -oyaml
```

In the output of this command, the `replicas` value of the `scheduling` entry under `spec.pdms` in `TidbCluster` is expected to be the same as the value you configured.

3. Observe whether the number of `TidbCluster` Pods is increased or decreased:

```shell
watch kubectl -n ${namespace} get pod -o wide
```

It usually takes about 10 to 30 seconds for PD microservice components to scale in or out.

## Scaling troubleshooting

During the horizontal or vertical scaling operation, Pods might go to the Pending state because of insufficient resources. See [Troubleshoot the Pod in Pending state](deploy-failures.md#the-pod-is-in-the-pending-state) to resolve it.
4 changes: 4 additions & 0 deletions en/suspend-tidb-cluster.md
@@ -62,6 +62,10 @@ If you need to suspend the TiDB cluster, take the following steps:
* TiProxy
* PD

> **Note:**
>
> If [PD microservices](https://docs.pingcap.com/tidb/dev/pd-microservices) (introduced in TiDB v8.0.0) are deployed in a cluster, the Pods of PD microservices are deleted after the PD Pods are deleted.

## Restore TiDB cluster

After a TiDB cluster or its component is suspended, if you need to restore the TiDB cluster, take the following steps: