diff --git a/website/content/v1.10/advanced/proprietary-kernel-modules.md b/website/content/v1.10/advanced/proprietary-kernel-modules.md
index 21e3dc308c..9a42ef4d7f 100644
--- a/website/content/v1.10/advanced/proprietary-kernel-modules.md
+++ b/website/content/v1.10/advanced/proprietary-kernel-modules.md
@@ -61,5 +61,5 @@ aliases:
 3. Deploying to your cluster
 
 ```bash
- talosctl upgrade --image ghcr.io/your-username/talos-installer: --preserve=true
+ talosctl upgrade --image ghcr.io/your-username/talos-installer:
 ```
diff --git a/website/content/v1.10/kubernetes-guides/configuration/ceph-with-rook.md b/website/content/v1.10/kubernetes-guides/configuration/ceph-with-rook.md
index 17f62bb824..31843c38d0 100644
--- a/website/content/v1.10/kubernetes-guides/configuration/ceph-with-rook.md
+++ b/website/content/v1.10/kubernetes-guides/configuration/ceph-with-rook.md
@@ -102,9 +102,6 @@ ceph-filesystem rook-ceph.cephfs.csi.ceph.com Delete Immediate
 
 ## Talos Linux Considerations
 
-It is important to note that a Rook Ceph cluster saves cluster information directly onto the node (by default `dataDirHostPath` is set to `/var/lib/rook`).
-If running only a single `mon` instance, cluster management is little bit more involved, as any time a Talos Linux node is reconfigured or upgraded, the partition that stores the `/var` [file system]({{< relref "../../learn-more/architecture#the-file-system" >}}) is wiped, but the `--preserve` option of [`talosctl upgrade`]({{< relref "../../reference/cli#talosctl-upgrade" >}}) will ensure that doesn't happen.
-
 By default, Rook configues Ceph to have 3 `mon` instances, in which case the data stored in `dataDirHostPath` can be regenerated from the other `mon` instances.
 So when performing maintenance on a Talos Linux node with a Rook Ceph cluster (e.g. upgrading the Talos Linux version), it is imperative that care be taken to maintain the health of the Ceph cluster.
 Before upgrading, you should always check the health status of the Ceph cluster to ensure that it is healthy.
diff --git a/website/content/v1.10/kubernetes-guides/configuration/local-storage.md b/website/content/v1.10/kubernetes-guides/configuration/local-storage.md
index 738fd973ba..ef3e9529e0 100644
--- a/website/content/v1.10/kubernetes-guides/configuration/local-storage.md
+++ b/website/content/v1.10/kubernetes-guides/configuration/local-storage.md
@@ -6,8 +6,6 @@ description: "Using local storage for Kubernetes workloads."
 Using local storage for Kubernetes workloads implies that the pod will be bound to the node where the local storage is available.
 Local storage is not replicated, so in case of a machine failure contents of the local storage will be lost.
 
-> Note: when using `EPHEMERAL` Talos partition (`/var`), make sure to use `--preserve` set while performing upgrades, otherwise you risk losing data.
-
 ## `hostPath` mounts
 
 The simplest way to use local storage is to use `hostPath` mounts.
diff --git a/website/content/v1.10/learn-more/architecture.md b/website/content/v1.10/learn-more/architecture.md
index f03cf76d5e..4b0c38dfbb 100644
--- a/website/content/v1.10/learn-more/architecture.md
+++ b/website/content/v1.10/learn-more/architecture.md
@@ -50,7 +50,6 @@ Directories like this are `overlayfs` backed by an XFS file system mounted at `/
 
 The `/var` directory is owned by Kubernetes with the exception of the above `overlayfs` file systems.
 This directory is writable and used by `etcd` (in the case of control plane nodes), the kubelet, and the CRI (containerd).
-Its content survives machine reboots, but it is wiped and lost on machine upgrades and resets, unless the
-`--preserve` option of [`talosctl upgrade`]({{< relref "../reference/cli#talosctl-upgrade" >}}) or the
+Its content survives machine reboots and machine upgrades, but it is wiped and lost on resets, unless the
 `--system-labels-to-wipe` option of [`talosctl reset`]({{< relref "../reference/cli#talosctl-reset" >}}) is used.
 
diff --git a/website/content/v1.10/talos-guides/upgrading-talos.md b/website/content/v1.10/talos-guides/upgrading-talos.md
index 5edd5dfe70..a54b4daf29 100644
--- a/website/content/v1.10/talos-guides/upgrading-talos.md
+++ b/website/content/v1.10/talos-guides/upgrading-talos.md
@@ -15,9 +15,6 @@ This scheme retains the previous Talos kernel and OS image following each upgrad
 If an upgrade fails to boot, Talos will roll back to the previous version.
 Likewise, Talos may be manually rolled back via API (or `talosctl rollback`), which will update the boot reference and reboot.
 
-Unless explicitly told to `preserve` data, an upgrade will cause the node to wipe the [EPHEMERAL]({{< relref "../learn-more/architecture/#file-system-partitions" >}}) partition, remove itself from the etcd cluster (if it is a controlplane node), and make itself as pristine as is possible.
-(This is the desired behavior except in specialised use cases such as single-node clusters.)
-
 *Note* An upgrade of the Talos Linux OS will not (since v1.0) apply an upgrade to the Kubernetes version by default.
 Kubernetes upgrades should be managed separately per [upgrading kubernetes]({{< relref "../kubernetes-guides/upgrading-kubernetes" >}}).
 
@@ -62,10 +59,6 @@ as:
      --image ghcr.io/siderolabs/installer:{{< release >}}
 ```
 
-There is an option to this command: `--preserve`, which will explicitly tell Talos to keep ephemeral data intact.
-In most cases, it is correct to let Talos perform its default action of erasing the ephemeral data.
-However, for a single-node control-plane, make sure that `--preserve=true`.
-
 Rarely, an upgrade command will fail due to a process holding a file open on disk.
 In these cases, you can use the `--stage` flag.
 This puts the upgrade artifacts on disk, and adds some metadata to a disk partition that gets checked very early in the boot process, then reboots the node.
@@ -154,7 +147,6 @@ From the user's standpoint, however, the processes are identical.
 However, since control plane nodes run additional services, such as etcd, there are some extra steps and checks performed on them.
 For instance, Talos will refuse to upgrade a control plane node if that upgrade would cause a loss of quorum for etcd.
 If multiple control plane nodes are asked to upgrade at the same time, Talos will protect the Kubernetes cluster by ensuring only one control plane node actively upgrades at any time, via checking etcd quorum.
-If running a single-node cluster, and you want to force an upgrade despite the loss of quorum, you can set `preserve` to `true`.
 
 **Q.** Can I break my cluster by upgrading everything at once?
diff --git a/website/content/v1.8/talos-guides/upgrading-talos.md b/website/content/v1.8/talos-guides/upgrading-talos.md
index 7b88319c0c..cd7cd15203 100644
--- a/website/content/v1.8/talos-guides/upgrading-talos.md
+++ b/website/content/v1.8/talos-guides/upgrading-talos.md
@@ -15,9 +15,6 @@ This scheme retains the previous Talos kernel and OS image following each upgrad
 If an upgrade fails to boot, Talos will roll back to the previous version.
 Likewise, Talos may be manually rolled back via API (or `talosctl rollback`), which will update the boot reference and reboot.
 
-Unless explicitly told to `preserve` data, an upgrade will cause the node to wipe the [EPHEMERAL]({{< relref "../learn-more/architecture/#file-system-partitions" >}}) partition, remove itself from the etcd cluster (if it is a controlplane node), and make itself as pristine as is possible.
-(This is the desired behavior except in specialised use cases such as single-node clusters.)
-
 *Note* An upgrade of the Talos Linux OS will not (since v1.0) apply an upgrade to the Kubernetes version by default.
 Kubernetes upgrades should be managed separately per [upgrading kubernetes]({{< relref "../kubernetes-guides/upgrading-kubernetes" >}}).
 
@@ -158,7 +155,6 @@ From the user's standpoint, however, the processes are identical.
 However, since control plane nodes run additional services, such as etcd, there are some extra steps and checks performed on them.
 For instance, Talos will refuse to upgrade a control plane node if that upgrade would cause a loss of quorum for etcd.
 If multiple control plane nodes are asked to upgrade at the same time, Talos will protect the Kubernetes cluster by ensuring only one control plane node actively upgrades at any time, via checking etcd quorum.
-If running a single-node cluster, and you want to force an upgrade despite the loss of quorum, you can set `preserve` to `true`.
 
 **Q.** Can I break my cluster by upgrading everything at once?
diff --git a/website/content/v1.9/talos-guides/upgrading-talos.md b/website/content/v1.9/talos-guides/upgrading-talos.md
index 03a6adf9a6..b2f7e0315a 100644
--- a/website/content/v1.9/talos-guides/upgrading-talos.md
+++ b/website/content/v1.9/talos-guides/upgrading-talos.md
@@ -15,9 +15,6 @@ This scheme retains the previous Talos kernel and OS image following each upgrad
 If an upgrade fails to boot, Talos will roll back to the previous version.
 Likewise, Talos may be manually rolled back via API (or `talosctl rollback`), which will update the boot reference and reboot.
 
-Unless explicitly told to `preserve` data, an upgrade will cause the node to wipe the [EPHEMERAL]({{< relref "../learn-more/architecture/#file-system-partitions" >}}) partition, remove itself from the etcd cluster (if it is a controlplane node), and make itself as pristine as is possible.
-(This is the desired behavior except in specialised use cases such as single-node clusters.)
-
 *Note* An upgrade of the Talos Linux OS will not (since v1.0) apply an upgrade to the Kubernetes version by default.
 Kubernetes upgrades should be managed separately per [upgrading kubernetes]({{< relref "../kubernetes-guides/upgrading-kubernetes" >}}).
 
@@ -157,7 +154,6 @@ From the user's standpoint, however, the processes are identical.
 However, since control plane nodes run additional services, such as etcd, there are some extra steps and checks performed on them.
 For instance, Talos will refuse to upgrade a control plane node if that upgrade would cause a loss of quorum for etcd.
 If multiple control plane nodes are asked to upgrade at the same time, Talos will protect the Kubernetes cluster by ensuring only one control plane node actively upgrades at any time, via checking etcd quorum.
-If running a single-node cluster, and you want to force an upgrade despite the loss of quorum, you can set `preserve` to `true`.
 
 **Q.** Can I break my cluster by upgrading everything at once?
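As a quick illustration of the workflow these hunks leave in place (upgrades without `--preserve`, with any wipe of EPHEMERAL deferred to reset time), here is a minimal sketch; the node address and installer tag are placeholders, not values taken from this diff:

```bash
# Upgrade a node; on v1.10 the EPHEMERAL (/var) partition is kept across the upgrade by default.
talosctl upgrade --nodes 10.20.30.40 \
  --image ghcr.io/siderolabs/installer:v1.10.0

# Wiping EPHEMERAL is now an explicit, reset-time decision.
talosctl reset --nodes 10.20.30.40 \
  --system-labels-to-wipe EPHEMERAL
```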