docs: remove the last mentions of preserve flag for Talos 1.8+
This flag no longer exists in Talos 1.8 and higher.

Fixes #10172

Signed-off-by: Dmitriy Matrenichev <[email protected]>
DmitriyMV committed Jan 21, 2025
1 parent 33c7f41 commit 683153a
Showing 7 changed files with 2 additions and 24 deletions.
@@ -61,5 +61,5 @@ aliases:
3. Deploying to your cluster

```bash
talosctl upgrade --image ghcr.io/your-username/talos-installer:<talos version> --preserve=true
talosctl upgrade --image ghcr.io/your-username/talos-installer:<talos version>
```
@@ -102,9 +102,6 @@ ceph-filesystem rook-ceph.cephfs.csi.ceph.com Delete Immediate

## Talos Linux Considerations

It is important to note that a Rook Ceph cluster saves cluster information directly onto the node (by default `dataDirHostPath` is set to `/var/lib/rook`).
If running only a single `mon` instance, cluster management is a little bit more involved, as any time a Talos Linux node is reconfigured or upgraded, the partition that stores the `/var` [file system]({{< relref "../../learn-more/architecture#the-file-system" >}}) is wiped, but the `--preserve` option of [`talosctl upgrade`]({{< relref "../../reference/cli#talosctl-upgrade" >}}) will ensure that doesn't happen.

By default, Rook configures Ceph to have 3 `mon` instances, in which case the data stored in `dataDirHostPath` can be regenerated from the other `mon` instances.
So when performing maintenance on a Talos Linux node with a Rook Ceph cluster (e.g. upgrading the Talos Linux version), it is imperative that care be taken to maintain the health of the Ceph cluster.
Before upgrading, you should always check the health status of the Ceph cluster to ensure that it is healthy.
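The pre-upgrade health check described above can be run from the Rook toolbox; a minimal sketch, assuming the toolbox is deployed as `rook-ceph-tools` in the `rook-ceph` namespace (the names used in the standard Rook examples):

```bash
# Query the overall cluster state from the Rook toolbox pod.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status

# Proceed with the Talos upgrade only if health is HEALTH_OK;
# otherwise inspect the details first.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health detail
```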
@@ -6,8 +6,6 @@ description: "Using local storage for Kubernetes workloads."
Using local storage for Kubernetes workloads implies that the pod will be bound to the node where the local storage is available.
Local storage is not replicated, so in case of a machine failure contents of the local storage will be lost.

> Note: when using the `EPHEMERAL` Talos partition (`/var`), make sure to set `--preserve` while performing upgrades; otherwise you risk losing data.
## `hostPath` mounts

The simplest way to use local storage is to use `hostPath` mounts.
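A minimal sketch of a `hostPath` mount on Talos Linux (the pod, node, and path names here are illustrative; the path must live under the writable `/var` hierarchy):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example        # illustrative name
spec:
  nodeName: worker-1            # pin to the node that holds the data
  containers:
    - name: app
      image: alpine:3
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      hostPath:
        path: /var/mnt/data     # illustrative; must be under a writable Talos path
        type: DirectoryOrCreate
```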
3 changes: 1 addition & 2 deletions website/content/v1.10/learn-more/architecture.md
@@ -50,7 +50,6 @@ Directories like this are `overlayfs` backed by an XFS file system mounted at `/

The `/var` directory is owned by Kubernetes with the exception of the above `overlayfs` file systems.
This directory is writable and used by `etcd` (in the case of control plane nodes), the kubelet, and the CRI (containerd).
Its content survives machine reboots, but it is wiped and lost on machine upgrades and resets, unless the
`--preserve` option of [`talosctl upgrade`]({{< relref "../reference/cli#talosctl-upgrade" >}}) or the
Its content survives machine reboots and machine upgrades, but it is wiped and lost on machine resets, unless the
`--system-labels-to-wipe` option of [`talosctl reset`]({{< relref "../reference/cli#talosctl-reset" >}})
is used.
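The wipe behavior on reset can be narrowed with that option; a sketch (the node address is illustrative):

```bash
# Reset the node, wiping only the EPHEMERAL partition (/var), then reboot.
talosctl reset --nodes 10.5.0.2 --system-labels-to-wipe EPHEMERAL --reboot
```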
8 changes: 0 additions & 8 deletions website/content/v1.10/talos-guides/upgrading-talos.md
@@ -15,9 +15,6 @@ This scheme retains the previous Talos kernel and OS image following each upgrad
If an upgrade fails to boot, Talos will roll back to the previous version.
Likewise, Talos may be manually rolled back via API (or `talosctl rollback`), which will update the boot reference and reboot.
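A manual rollback can be sketched as:

```bash
# Switch the boot reference back to the previous Talos version and reboot.
talosctl rollback --nodes <node ip>
```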

Unless explicitly told to `preserve` data, an upgrade will cause the node to wipe the [EPHEMERAL]({{< relref "../learn-more/architecture/#file-system-partitions" >}}) partition, remove itself from the etcd cluster (if it is a controlplane node), and make itself as pristine as is possible.
(This is the desired behavior except in specialised use cases such as single-node clusters.)

*Note* An upgrade of the Talos Linux OS will not (since v1.0) apply an upgrade to the Kubernetes version by default.
Kubernetes upgrades should be managed separately per [upgrading kubernetes]({{< relref "../kubernetes-guides/upgrading-kubernetes" >}}).
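A separate Kubernetes upgrade can be sketched as (the target version is illustrative):

```bash
# Upgrade the Kubernetes components across the cluster,
# independently of the Talos OS version.
talosctl --nodes <controlplane node> upgrade-k8s --to 1.32.0
```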

@@ -62,10 +59,6 @@ as:
--image ghcr.io/siderolabs/installer:{{< release >}}
```

There is an option to this command: `--preserve`, which will explicitly tell Talos to keep ephemeral data intact.
In most cases, it is correct to let Talos perform its default action of erasing the ephemeral data.
However, for a single-node control plane, make sure to set `--preserve=true`.

Rarely, an upgrade command will fail due to a process holding a file open on disk.
In these cases, you can use the `--stage` flag.
This puts the upgrade artifacts on disk, and adds some metadata to a disk partition that gets checked very early in the boot process, then reboots the node.
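A staged upgrade can be sketched as:

```bash
# Stage the upgrade artifacts on disk; they are applied early in the next boot.
talosctl upgrade --nodes <node ip> \
  --image ghcr.io/siderolabs/installer:{{< release >}} --stage
```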
@@ -154,7 +147,6 @@ From the user's standpoint, however, the processes are identical.
However, since control plane nodes run additional services, such as etcd, there are some extra steps and checks performed on them.
For instance, Talos will refuse to upgrade a control plane node if that upgrade would cause a loss of quorum for etcd.
If multiple control plane nodes are asked to upgrade at the same time, Talos will protect the Kubernetes cluster by ensuring only one control plane node actively upgrades at any time, by checking etcd quorum.
If running a single-node cluster, and you want to force an upgrade despite the loss of quorum, you can set `preserve` to `true`.

**Q.** Can I break my cluster by upgrading everything at once?

4 changes: 0 additions & 4 deletions website/content/v1.8/talos-guides/upgrading-talos.md
@@ -15,9 +15,6 @@ This scheme retains the previous Talos kernel and OS image following each upgrad
If an upgrade fails to boot, Talos will roll back to the previous version.
Likewise, Talos may be manually rolled back via API (or `talosctl rollback`), which will update the boot reference and reboot.

Unless explicitly told to `preserve` data, an upgrade will cause the node to wipe the [EPHEMERAL]({{< relref "../learn-more/architecture/#file-system-partitions" >}}) partition, remove itself from the etcd cluster (if it is a controlplane node), and make itself as pristine as is possible.
(This is the desired behavior except in specialised use cases such as single-node clusters.)

*Note* An upgrade of the Talos Linux OS will not (since v1.0) apply an upgrade to the Kubernetes version by default.
Kubernetes upgrades should be managed separately per [upgrading kubernetes]({{< relref "../kubernetes-guides/upgrading-kubernetes" >}}).

@@ -158,7 +155,6 @@ From the user's standpoint, however, the processes are identical.
However, since control plane nodes run additional services, such as etcd, there are some extra steps and checks performed on them.
For instance, Talos will refuse to upgrade a control plane node if that upgrade would cause a loss of quorum for etcd.
If multiple control plane nodes are asked to upgrade at the same time, Talos will protect the Kubernetes cluster by ensuring only one control plane node actively upgrades at any time, by checking etcd quorum.
If running a single-node cluster, and you want to force an upgrade despite the loss of quorum, you can set `preserve` to `true`.

**Q.** Can I break my cluster by upgrading everything at once?

4 changes: 0 additions & 4 deletions website/content/v1.9/talos-guides/upgrading-talos.md
@@ -15,9 +15,6 @@ This scheme retains the previous Talos kernel and OS image following each upgrad
If an upgrade fails to boot, Talos will roll back to the previous version.
Likewise, Talos may be manually rolled back via API (or `talosctl rollback`), which will update the boot reference and reboot.

Unless explicitly told to `preserve` data, an upgrade will cause the node to wipe the [EPHEMERAL]({{< relref "../learn-more/architecture/#file-system-partitions" >}}) partition, remove itself from the etcd cluster (if it is a controlplane node), and make itself as pristine as is possible.
(This is the desired behavior except in specialised use cases such as single-node clusters.)

*Note* An upgrade of the Talos Linux OS will not (since v1.0) apply an upgrade to the Kubernetes version by default.
Kubernetes upgrades should be managed separately per [upgrading kubernetes]({{< relref "../kubernetes-guides/upgrading-kubernetes" >}}).

@@ -157,7 +154,6 @@ From the user's standpoint, however, the processes are identical.
However, since control plane nodes run additional services, such as etcd, there are some extra steps and checks performed on them.
For instance, Talos will refuse to upgrade a control plane node if that upgrade would cause a loss of quorum for etcd.
If multiple control plane nodes are asked to upgrade at the same time, Talos will protect the Kubernetes cluster by ensuring only one control plane node actively upgrades at any time, via checking etcd quorum.
If running a single-node cluster, and you want to force an upgrade despite the loss of quorum, you can set `preserve` to `true`.

**Q.** Can I break my cluster by upgrading everything at once?

