Non-block based filesystem volume creation is inconsistent across drivers #14903

hamistao opened this issue Jan 31, 2025
On non-block based drivers, filesystem volumes can be unbounded, with access to the entire storage quota of their pool. When creating such volumes on btrfs and dir, the pool's volume.size config is ignored and unbounded volumes are created. On zfs, however, these volumes have their size limited by volume.size whenever it is set. Tests with dir with project quotas enabled are pending.
This behavior goes back to LXD 5.0 (as zfs wasn't supported on 4.0).

To reproduce this, one can run the following on the specified drivers and compare their behaviors:

```
lxc launch ubuntu:n c -s poolName
lxc storage volume create poolName vol
lxc storage volume attach poolName vol c /mnt/foo
lxc exec c -- df -h
```
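
Note that the outputs below assume the pool already has volume.size set (7GiB in this case). A minimal setup sketch, assuming a loop-backed btrfs pool named poolName:

```
# Hypothetical setup: loop-backed btrfs pool with a 7GiB default size for new volumes
lxc storage create poolName btrfs size=30GiB
lxc storage set poolName volume.size 7GiB
```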

Here is the output of lxc exec c -- df -h on btrfs in LXD 4.0, with c being a container created on a btrfs pool with volume.size=7GiB and the custom volume mounted at /mnt/foo, similar to the reproduction steps above:

```
Filesystem       Size  Used Avail Use% Mounted on
/dev/loop24      **30G**  1.1G   29G   4% /
none                492K  4.0K  488K   1% /dev
efivarfs            184K  155K   25K  87% /sys/firmware/efi/efivars
tmpfs               100K     0  100K   0% /dev/lxd
tmpfs               100K     0  100K   0% /dev/.lxd-mounts
tmpfs                12G     0   12G   0% /dev/shm
tmpfs               4.7G  164K  4.7G   1% /run
tmpfs               5.0M     0  5.0M   0% /run/lock
/dev/loop24      **30G**  1.1G   29G   4% /mnt/foo
```

For comparison, here is the same output when using zfs on 5.0/stable (zfs wasn't present on 4.0):

```
Filesystem              Size  Used Avail Use% Mounted on
zfs/containers/c        7.5G  483M  **7.0G**   7% /
none                    492K  4.0K  488K   1% /dev
efivarfs                184K  155K   25K  87% /sys/firmware/efi/efivars
tmpfs                   100K     0  100K   0% /dev/lxd
tmpfs                   100K     0  100K   0% /dev/.lxd-mounts
tmpfs                    12G     0   12G   0% /dev/shm
tmpfs                   4.7G  164K  4.7G   1% /run
tmpfs                   5.0M     0  5.0M   0% /run/lock
zfs/custom/default_vol  7.0G  128K  **7.0G**   1% /mnt/foo
```

I will also test dir with project quotas enabled.

One could argue that zfs is different because it can behave like a block based driver when volume.block_mode is enabled, in which case it has to set a size for all volumes, even filesystem ones, and then it makes sense to use the size defined in volume.size. Applying volume.size to filesystem volumes regardless of volume.block_mode would at least make the usage of volume.size consistent within the zfs driver itself.
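
For reference, one way to confirm what zfs actually enforces on the filesystem custom volume is to inspect the dataset's quota and compare it with the size LXD reports. A sketch, with the dataset name taken from the df output above (it may differ on other setups):

```
# Quota/refquota that zfs applies to the custom volume's dataset
zfs get quota,refquota zfs/custom/default_vol

# Effective size reported by LXD for the same volume
lxc storage volume get poolName vol size
```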

This discussion was taken from #14829

On an unrelated note, another inconsistency is that volumes on a dir pool cannot have an enforced size unless the user intentionally enables project quotas on the host filesystem. Setting a size is still possible, but it is never actually applied, which is misleading to the user. One option would be to error out when trying to set a size on a dir pool without project quotas enabled.
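
For context, enforcing sizes on a dir pool requires project quotas on the backing filesystem. A sketch of enabling them on ext4 (device and pool path are placeholders, and tune2fs needs the filesystem unmounted):

```
# Enable the project feature and project quota tracking on the unmounted ext4 filesystem
tune2fs -O project -Q prjquota /dev/sdXN

# Remount with project quotas active, then create the dir pool on top of it
mount -o prjquota /dev/sdXN /var/lib/lxd/storage-pools/poolName
```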
