How to restore thin pool LVs and thin LV via vgcfgrestore? #126
The thin pool metadata holds all the mappings between the thin devices and the 'data device' (which is typically a hidden LV). These mappings are changed by the kernel when blocks are provisioned or writes to snapshots occur. The metadata changes far too quickly for it to be part of the LVM metadata. Because it is constantly changing, we cannot back it up in any meaningful way unless the pool is never used between the backup and the restore. Could you tell me some more about what you're trying to do, please? E.g., if you're moving the pool to a new machine, you just need to activate the metadata and data devices (but not the pool) and copy them across. |
vgcfgrestore --force is there purely for the purpose of 'repairing' metadata when the user is well aware of the context; --force is not meant to be used in a random "let's try and see" way. When thin-pool kernel metadata are repaired with an external tool (thin_repair), they then need to match the lvm2 user-space metadata and the transaction id stored in them. This is currently non-automated manual work, where a skilled user may need to update the lvm2 metadata if some volumes/transactions were lost during the kernel metadata repair operation. To be more explicit: the lvm2 metadata store information about LV size, name, dates, and which segments of a PV hold the segments of an LV (typical granularity 4M); this changes only occasionally. Currently there is not much point in supporting backups of the kernel metadata, since they become invalid relatively quickly (i.e. the btrees no longer address the same data blocks). There is only one limited case: if the thin-pool has been used for a single thin device without discard, the 'allocated' kernel blocks would be more or less persistent. But why would anyone use a thin-pool this way, when a plain linear LV does the job in a far more elegant way? |
@jthornber We are trying to clone or image the file systems on LVs, including thin provisioning, as requested here: |
@zkabelac Thank you very much. As mentioned previously, we'd like to take an image of the whole file systems on the partitions and LVs, including thin provisioning. Since the file system is unmounted, the data on it won't change, so there should be some way we can image it. |
Sounds like you just want to copy the metadata and data devices then.
|
Yes, we do. So what's the best way to do this? Thank you so much. Steven |
At this moment you can 'activate' the thin-pool's data & metadata devices in so-called 'component' activation mode: both subLVs can be activated read-only, and you can then copy these LVs. If you are copying the 'whole' PV, you get a complete copy of the lvm2 metadata with all the content of the PV itself, but that also means a 100% copy of the whole device (even its unused portions). ATM there is no other 'cloning' mechanism available, but it could possibly be an interesting RFE... It just needs to be clear that 'vgcfgbackup/vgcfgrestore' cannot be seen as a 'cloning' mechanism: it expects the underlying content of the device to match the restored lvm2 metadata; lvm2 does not solve this. |
Thank you very much. Could you please let me know how to "'activate' thin-pool's data & metadata device in so-called 'component' activation mode"? Steven |
Component activation is supported from version >= 2.02.178. |
Thanks for your reply. I am running Debian Sid, so the lvm2 is 2.03. When I tried to activate the components, it failed as follows:
root@debian:~# lvchange -ay -K -v /dev/testVG/testVG-test--pool_tmeta
root@debian:~# lvchange -ay -v /dev/testVG/testVG-test--pool_tmeta
root@debian:~# lvchange -ay -K -v /dev/testVG/testVG-test--pool_tdata
root@debian:~# lvchange -ay -v /dev/testVG/testVG-test--pool_tdata
Looks like there is something more I have to do so that the components can be activated? Steven |
Please always use the VG/LV name form. Assuming your VG name is testVG-test, in your case that would be: lvchange -ay testVG-test/pool_tdata |
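Putting the advice above together, a minimal sketch of the read-only component copy (assuming a VG named testVG with a thin pool named test-pool, per the lvscan output later in this thread; must be run as root on a system where that pool exists, and the pool itself must not be active):

```shell
# Activate only the pool's hidden component LVs (read-only by design),
# then image each one with dd.
lvchange -ay testVG/test-pool_tmeta
lvchange -ay testVG/test-pool_tdata

dd if=/dev/testVG/test-pool_tmeta of=/backup/test-pool_tmeta.img bs=4M status=progress
dd if=/dev/testVG/test-pool_tdata of=/backup/test-pool_tdata.img bs=4M status=progress

# Deactivate the components again once the copy is done.
lvchange -an testVG/test-pool_tmeta
lvchange -an testVG/test-pool_tdata
```

The /backup paths are placeholders; any destination with enough space works.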
Sorry for the late response. Now I am still having a problem activating the thin volume.
root@debian:# fdisk /dev/sda
Welcome to fdisk (util-linux 2.35.2).
Command (m for help): n
Created a new partition 2 of type 'Linux' and of size 191 MiB.
Command (m for help): wq
root@debian:# cat lvm_testVG.conf
contents = "Text Format Volume Group"
description = "vgcfgbackup -f /tmp/vgcfg_tmp.5KB91l testVG"
creation_host = "debian" # Linux debian 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64
testVG {
}
root@debian:# pvscan
root@debian:# vgcfgrestore --force -f lvm_testVG.conf testVG
root@debian:# vgscan
root@debian:# lvscan
root@debian:# lvchange -ay testVG-test/pool_tdata
root@debian:# lvchange -ay testVG/test-pool
root@debian:# lvchange -ay testVG/test-pool_tdata
root@debian:# lvchange -ay testVG/test-pool_tmeta
root@debian:# lvscan
root@debian:# ls -l /dev/testVG/
root@debian:# lvchange -ay testVG
root@debian:~# lvscan |
Hi, is there a roadmap for this? Clonezilla fails to clone block devices containing a VG with thin pools; they mention this issue on their support mailing lists. |
What do you mean by 'clone'?
|
What they do is activate all LVs and dump their contents one by one using partclone/partimage/dd, and I believe they also do some vgcfgbackup/vgcfgrestore. I haven't examined the logs in detail, but the bottom line is they fail to duplicate VGs that contain thin LVs, and they point to this GitHub issue in particular as the root cause of the inability to "clone". |
This is the referenced clonezilla discussion: https://sourceforge.net/p/clonezilla/bugs/330/ |
Sounds like you want this tool that I wrote last year.
https://github.com/jthornber/blk-archive
|
I dd-copied them, but how do I restore them, since after doing a vgcfgrestore they can only be activated read-only? |
For a 'dd' restore you just 'lvcreate' LV1 for the data and 'lvcreate' LV2 for the metadata, and then you join both LVs into a thin-pool, with parameters you can specify, getting the info from 'vgcfgrestore'. We already have a similar request for an 'empty' VG, but currently it's not supported by upstream lvm2. |
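A rough sketch of the rebuild described above, assuming dd images of the component LVs exist under /backup and the target VG is named testVG. The LV names, sizes, and paths here are illustrative; the real sizes must match the originals exactly (take them from the vgcfgbackup text file), and lvconvert must be told not to wipe the restored metadata when it asks:

```shell
# Recreate data and metadata LVs with the exact sizes of the originals.
lvcreate -n test-pool_data -L 2G testVG
lvcreate -n test-pool_meta -L 4M testVG

# Copy the imaged contents back onto the new LVs.
dd if=/backup/test-pool_tdata.img of=/dev/testVG/test-pool_data bs=4M
dd if=/backup/test-pool_tmeta.img of=/dev/testVG/test-pool_meta bs=4M

# Join them into a thin pool, reusing the restored metadata
# (answer 'n' if asked whether to wipe/zero the metadata LV).
lvconvert --type thin-pool --poolmetadata testVG/test-pool_meta testVG/test-pool_data
```

After this, the thin LVs still have to be re-described in the lvm2 metadata (e.g. via an edited vgcfgrestore), which is the non-automated part discussed earlier in the thread.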
As a 'secondary' idea, you could just 'reload' the tables for the 'read-only' activated volumes with 'dmsetup load' and make the devices 'writable'. Currently this kind of operation is not supported by lvm2; eventually we could introduce some 'read-write' component activation which would be tracked by lvm2 metadata, as the risk of data corruption for the user when the data are 'writable' is very high... |
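The 'reload' idea could look roughly like this, a sketch assuming the tdata component was activated read-only under the dm name testVG-test--pool_tdata. dmsetup load stages a new (read-write by default) table, and resume swaps it in:

```shell
# Re-read the device's current mapping table, load it again without the
# read-only flag, then resume to switch to the new read-write table.
dmsetup table testVG-test--pool_tdata | dmsetup load testVG-test--pool_tdata
dmsetup resume testVG-test--pool_tdata
```

As the comment warns, lvm2 knows nothing about this change, so writing to the device this way is entirely at your own risk.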
Yes, blk-archive helps with imaging thin devices, and you can restore archived thin devices to another thin-pool. However, in the case of disk cloning, this way might not be faster than simply copying the thin-pool components (metadata & data devices), since an intermediate archiving process is introduced and extra storage is required. Right now, a simple 'dd' of the component LVs might be the easiest way to migrate a thin-pool between disks. Further optimizations could be made by copying only the metadata/data blocks in use, like what Partclone does, yet there's no similar feature in thin-provisioning-tools. |
hey @stevenshiau what do you think of this approach for clonezilla? |
"dd dumping the tmeta and tdata devices" -> If you can be more specific, especially with some of examples, we can do more tests to see if it works well. |
see
in your previous comments, these are the components of the thin pool
You cannot activate the components and the thin LVs at the same time: you either activate the thin LVs normally OR you activate only the thin pool components. The second case is what you have to do in order to clone the whole thin pool (NOT individual thin LVs!). to enumerate thin pool metadata components: thin pool data components:
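The enumeration commands appear to have been lost from the comment above; a hedged reconstruction using plain lvs (the bracketed names in the output are the hidden component LVs):

```shell
# List all LVs including hidden ones; the thin pool's components show up
# as [<pool>_tmeta] (metadata) and [<pool>_tdata] (data).
lvs -a -o lv_name,attr,size,devices testVG
```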
@stevenshiau for the restoring part: after doing for converting the device mapper table from ro to rw:
after finishing the |
Be aware that the component LVs might be virtual targets (targets that do not map to a PV directly), e.g., cache, writecache, or raid. You should handle those target types with care. For example, given a cached thin-pool where the cache-pool is located on other drives, it's better to uncache the thin-pool ( |
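The uncache step mentioned above might look like the following sketch, assuming a cached thin pool testVG/test-pool. lvconvert --uncache flushes the cache and removes it, so the components map to plain PV extents again before cloning (use --splitcache instead if the cache pool should be kept):

```shell
# Flush and detach the cache from the cached pool before imaging
# its component LVs.
lvconvert --uncache testVG/test-pool
```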
@mailinglists35 Today I finally got some time to give it a try. However, I do not find anything after doing |
I will try to setup a simulation environment and show what I do step by step |
In the lvmthin manual, it says:
lvremove of thin pool LVs, thin LVs and snapshots cannot be reversed with vgcfgrestore.
vgcfgbackup does not back up thin pool metadata.
I tried to use vgcfgbackup to save the LVM layout, which includes thin pool LVs and thin LVs. When restoring it with "vgcfgrestore --force", the thin pool LVs and thin LVs are restored. However, I just cannot make them active.
The following is the issue I have:
root@debian:~# lvscan
inactive '/dev/testVG/test-pool' [2.00 GiB] inherit
inactive '/dev/testVG/test-thinLV' [1.00 GiB] inherit
root@debian:~# lvchange -ay -K -v /dev/testVG/test-pool
Activating logical volume testVG/test-pool.
activation/volume_list configuration setting not defined: Checking only host tags for testVG/test-pool.
Creating testVG-test--pool_tmeta
Loading table for testVG-test--pool_tmeta (254:0).
Resuming testVG-test--pool_tmeta (254:0).
Creating testVG-test--pool_tdata
Loading table for testVG-test--pool_tdata (254:1).
Resuming testVG-test--pool_tdata (254:1).
Executing: /usr/sbin/thin_check -q /dev/mapper/testVG-test--pool_tmeta
Creating testVG-test--pool-tpool
Loading table for testVG-test--pool-tpool (254:2).
Resuming testVG-test--pool-tpool (254:2).
Thin pool testVG-test--pool-tpool (254:2) transaction_id is 2, while expected 1.
Removing testVG-test--pool-tpool (254:2)
Removing testVG-test--pool_tdata (254:1)
Removing testVG-test--pool_tmeta (254:0)
So my question is: how can I back up the LVM config, including thin provisioning, and then restore it on a blank device?
Thank you very much.
Steven