On non-block-based drivers, filesystem volumes can be unbounded, having access to the entire storage quota of their pool. When creating such volumes on `btrfs` and `dir`, the pool's `volume.size` config is ignored and unbounded volumes are created. But on `zfs`, those volumes have their size limited by whatever is defined in `volume.size`, if not empty. Tests with `dir` with project quotas enabled are pending.
This behavior goes back to LXD 5.0 (as `zfs` wasn't supported on 4.0).
To reproduce this, one can run the following on the specified drivers and compare their behaviors:
```
lxc launch ubuntu:n c -s poolName
lxc storage volume create poolName vol
lxc storage volume attach poolName vol c /mnt/foo
lxc exec c -- df -h
```
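The steps above assume a pool already exists with `volume.size` set. A minimal setup sketch, assuming the `poolName` used above (swap the driver name to compare behaviors):

```
lxc storage create poolName zfs volume.size=7GiB
# or, for comparison:
lxc storage create poolName btrfs volume.size=7GiB
```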
Here is the output of `lxc exec c -- df -h` on `btrfs` in LXD 4.0, with `c` being a container created in a btrfs pool with `volume.size=7GiB` and a custom volume mounted on `/mnt/foo`, similar to what is done in the reproduction above:
For comparison, here is the same output when using `zfs` on 5.0/stable (`zfs` wasn't present on 4.0):
```
Filesystem              Size  Used  Avail     Use%  Mounted on
zfs/containers/c        7.5G  483M  **7.0G**    7%  /
none                    492K  4.0K  488K        1%  /dev
efivarfs                184K  155K  25K        87%  /sys/firmware/efi/efivars
tmpfs                   100K     0  100K        0%  /dev/lxd
tmpfs                   100K     0  100K        0%  /dev/.lxd-mounts
tmpfs                    12G     0  12G         0%  /dev/shm
tmpfs                   4.7G  164K  4.7G        1%  /run
tmpfs                   5.0M     0  5.0M        0%  /run/lock
zfs/custom/default_vol  7.0G  128K  **7.0G**    1%  /mnt/foo
```
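To double-check where the 7.0G limit on `/mnt/foo` comes from, one can inspect the volume config and the underlying dataset directly (dataset name taken from the `df` output above; depending on `zfs.use_refquota`, the limit may land on `quota` or `refquota`):

```
lxc storage volume show poolName vol
zfs get quota,refquota zfs/custom/default_vol
```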
I will also try testing with `dir` with project quotas enabled.
One could argue that zfs is different because it can behave like a block-based driver when it has `volume.block_mode` enabled, in which case it has to set a size for all volumes, even filesystem ones, and then it makes sense to use the size defined in `volume.size`. So we would be making the usage of `volume.size` more consistent within the zfs driver, as it would apply to filesystem volumes independently of the value of `volume.block_mode`.
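To illustrate the block-mode case, a rough sketch (assuming the pool-level key is `volume.zfs.block_mode`, which the text above shortens to `volume.block_mode`):

```
# Make new volumes on this pool use fixed-size zvols instead of datasets:
lxc storage set poolName volume.zfs.block_mode true

# Filesystem volumes created from now on must have a size, so volume.size
# (or an explicit size=...) always applies:
lxc storage volume create poolName blockvol
lxc storage volume show poolName blockvol
```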
This discussion was taken from #14829.

On an unrelated note, another inconsistency is that volumes on a `dir` pool cannot have an effective size unless the user intentionally enables project quotas on the host filesystem. But setting sizes is possible all the same, though they are never actually applied, which is misleading to the user. It would be an option to error out when trying to set sizes on a `dir` pool without project quotas enabled.
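For reference, enabling project quotas on an ext4 host filesystem looks roughly like this (hypothetical device and mount point; the filesystem must be unmounted before changing features with `tune2fs`):

```
# /dev/sdb1 and /srv/lxd-pool are placeholders for the actual backing
# device and the dir pool's source path:
umount /dev/sdb1
tune2fs -O project -Q prjquota /dev/sdb1
mount -o prjquota /dev/sdb1 /srv/lxd-pool
```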