Bug fix/documentation request: background target doesn't work? #764

Open
presto8 opened this issue Oct 12, 2024 · 3 comments

@presto8

presto8 commented Oct 12, 2024

I set up a 3-drive filesystem with the labels hdd.hdd1, hdd.hdd2, and hdd.hdd3. The filesystem-level targets were all left at their default values (none); metadata_replicas was set to 3 and all other replica options were left at 1. I then created a subvolume and set the foreground and background targets for that subvolume to hdd.hdd3. Finally, I ran bcachefs data rereplicate.
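
For concreteness, here is roughly what that setup looks like as commands. This is a sketch, not my exact history: device names and the mountpoint are placeholders, and the per-subvolume targets assume the setattr subcommand from recent bcachefs-tools.

```sh
# Three drives, metadata replicated to all of them, everything else at defaults.
bcachefs format \
    --metadata_replicas=3 \
    --label=hdd.hdd1 /dev/sdb \
    --label=hdd.hdd2 /dev/sdc \
    --label=hdd.hdd3 /dev/sdd

mount -t bcachefs /dev/sdb:/dev/sdc:/dev/sdd /mnt

# Subvolume whose foreground and background targets point at the third drive.
bcachefs subvolume create /mnt/subvol
bcachefs setattr --foreground_target=hdd.hdd3 --background_target=hdd.hdd3 /mnt/subvol

# Ask the background machinery to move/replicate existing data to match the options.
bcachefs data rereplicate /mnt
```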

I then unmounted the filesystem, removed hdd.hdd1 and hdd.hdd2, and re-mounted with only hdd.hdd3 and -o very_degraded. I expected the files in the subvolume to be available, since both the foreground and background targets were set to hdd.hdd3. However, trying to access files in the subvolume resulted in many I/O errors.
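
The failure step, roughly (again with placeholder device names):

```sh
umount /mnt
# hdd.hdd1 and hdd.hdd2 physically removed; mount only the remaining drive.
mount -t bcachefs -o very_degraded /dev/sdd /mnt

cp /mnt/subvol/somefile /dev/null   # fails with I/O errors
```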

The documentation at https://bcachefs.org/GettingStarted/ states "For a multi device filesystem, with sda1 caching sdb1: [...] This will configure the filesystem so that writes will be buffered to /dev/sda1 before being written back to /dev/sdb1 in the background, and that hot data will be promoted to /dev/sda1 for faster access."
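
The command elided in that quote is not reproduced here verbatim, but the setup it describes is a tiered format along these lines (labels and targets are illustrative, following the page's sda1/sdb1 example):

```sh
bcachefs format \
    --label=ssd.ssd1 /dev/sda1 \
    --label=hdd.hdd1 /dev/sdb1 \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd
```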

However, this does not seem to be the case. Is this feature working or is my configuration incorrect in some way? Or if the feature is not working, should the documentation be updated to reflect that?

@jpsollie
Contributor

I do not think bcachefs stores metadata on a background target in a multi-tiered filesystem.
You can check this with "bcachefs fs usage (mountpoint)" and see whether your background target has btree data.
If not, you need to set metadata_target explicitly, e.g. --metadata_target=hdd1,hdd2,hdd3.
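
A sketch of that check (mountpoint, UUID and target value are placeholders; the sysfs path assumes a kernel that exposes runtime options under /sys/fs/bcachefs):

```sh
# Per-device breakdown; look at whether the background target holds any btree data.
bcachefs fs usage -h /mnt

# If it doesn't, metadata_target can also be changed on a mounted filesystem
# through the options directory in sysfs, e.g. pointing it at the hdd group label:
echo hdd > /sys/fs/bcachefs/<fs-uuid>/options/metadata_target
```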

@presto8
Author

presto8 commented Oct 24, 2024

Actually, the metadata was replicated! I was able to do an "ls" on the directory and all of the files showed up. But when trying to access the files (e.g. cp file /dev/null), I/O errors were observed. I didn't explain that as clearly as I could have. It appears that the data has not been moved to the target. I am not sure if there is any way to get finer detail on where data lives, on a per-file or per-subvolume basis?
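
Not a full answer to the per-file question, but two things give more visibility (a sketch; the xattr namespaces assume the bcachefs/bcachefs_effective xattrs exposed by the kernel, and they show option settings rather than physical extent locations):

```sh
# Filesystem-wide, per-device breakdown of btree and user data:
bcachefs fs usage -h /mnt

# Per-file/per-directory options: "bcachefs." shows explicitly set options,
# "bcachefs_effective." shows the values actually in effect after inheritance.
getfattr -d -m '^bcachefs\.' /mnt/subvol/somefile
getfattr -d -m '^bcachefs_effective\.' /mnt/subvol/somefile
```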

@jpsollie
Contributor

Maybe you should check your dmesg: if any file I/O fails, it could simply be a checksum error due to an invalid write.
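
For example:

```sh
# Recent bcachefs kernel messages, including any checksum/read errors for the failed I/O:
dmesg | grep -i bcachefs | tail -n 50
```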
