Replies: 5 comments
-
Thanks for opening your first issue here! Be sure to follow the issue template!
-
@Jayd603, is your system VM template seeded on your secondary storage? The path
-
Yeah, the NFS client says "protocol not supported" when it can't find a directory, which is just a tiny bit misleading. 😄 So what actually happened was that on initial zone setup, the setup utility let me configure a non-usable NFS secondary storage server and carried on with no errors; this required editing the DB and doing some digging to fix. It's a sort of bug, since it should not allow a non-working secondary storage server to be added.
-
@Jayd603, can you remove the zone/storage and re-add it instead?
-
It takes a lot of steps to delete a zone, and I don't remember if it removed the stale/bad secondary storage entry. I think I added a new one OK and everything worked, but the old one still needed to be removed in the DB. I /might/ be wrong about that, but there should still be some type of basic check when adding secondary storage to prevent issues. It's obviously a minor issue; I have this cluster up and running, just need to add more host machines now. Thanks.
-
Latest CloudStack.
I just did a new CloudStack install and set up the first zone. I accidentally used the wrong secondary NFS storage path, and the setup process gave no errors. Now, after creating new and correct secondary storage, I cannot migrate and cannot delete the old one. The zone setup script should test connectivity and write ability of the secondary storage, I'm thinking. For now I'm just looking for the best way to fix it; it might just be easier to destroy the zone and re-create it.
Oh, and, there are now templates and ISOs in "Not Ready" state with no option to remove them in the UI.
I'm trying to fix in the db now.
Update: easily removed the Not Ready templates in the vm_template table; now looking at changing the secondary storage path.
Update 2: I was able to use the UI to remove the old secondary storage after running:
DELETE from template_store_ref WHERE state='Allocated';
DELETE FROM cloud.vm_template WHERE unique_name="<uid/name>";
Testing things now to make sure nothing is broken.
Thus far, I still cannot register any new ISOs; they stay Not Ready. I tested the ability to connect to the NFS share and write to it from both the KVM host and the management host.
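For reference, this is roughly the kind of write check I mean; a minimal sketch, where the server IP and export path are the ones from this thread and the mount point /mnt/sectest is made up:

```shell
#!/bin/sh
# Sketch of a read/write probe for an NFS share. The server IP and export
# below come from this thread; /mnt/sectest is an arbitrary test mount point.

check_rw_dir() {
    # Write a probe file, read it back, then remove it.
    dir="$1"
    probe="$dir/.cs-write-probe.$$"
    echo "probe" > "$probe" || return 1
    grep -q "probe" "$probe" || return 1
    rm -f "$probe"
}

# Run these as root against the actual share:
#   mkdir -p /mnt/sectest
#   mount -t nfs 192.168.22.50:/mnt/SSDStorage1/Cloudstack /mnt/sectest
#   check_rw_dir /mnt/sectest && echo "share is writable"
#   umount /mnt/sectest
```

If the mount itself fails, the problem is below CloudStack (export options, NFS version); if the mount succeeds but the probe fails, look at permissions/squashing on the export.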
2024-04-26 15:48:07,466 INFO [o.a.c.s.SecondaryStorageManagerImpl] (secstorage-1:ctx-0dc20f4d) (logid:1f8662b9) Unable to start secondary storage VM [333] for standby capacity, it will be recycled and will start a new one.
2024-04-26 15:48:07,466 DEBUG [c.c.a.SecondaryStorageVmAlertAdapter] (secstorage-1:ctx-0dc20f4d) (logid:1f8662b9) received secondary storage vm alert
2024-04-26 15:48:07,476 INFO [o.a.c.s.PremiumSecondaryStorageManagerImpl] (secstorage-1:ctx-0dc20f4d) (logid:1f8662b9) Primary secondary storage is not even started, wait until next turn
Hmm, possibly other secondary storage access issues.
/usr/bin/mount -o nodev,nosuid,noexec 192.168.22.50:/mnt/SSDStorage1/Cloudstack/template/tmpl/1/3 /mnt/a3313f14-20d0-3952-a7e1-571f5aa1654a) unexpected exit status 32: mount.nfs: Protocol not supported
that would be it
Tried setting secstorage.nfs.version to 3 in the CloudStack UI, no luck; going to try to force the version on the KVM host. I'm digressing from the initial bug at this point though, so I'll shut up now.
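For anyone else hitting the same "Protocol not supported" error, a sketch of checking and pinning the NFS version client-side on the KVM host. The server/export are the ones from the error above; check the section name against your distro's nfsmount.conf(5) man page:

```shell
# Test by hand which NFS versions the server actually accepts (run as root):
#   mount -t nfs -o vers=3 192.168.22.50:/mnt/SSDStorage1/Cloudstack /mnt/sectest
#   mount -t nfs -o vers=4.1 192.168.22.50:/mnt/SSDStorage1/Cloudstack /mnt/sectest

# To pin NFSv3 as the default for all NFS mounts on the host, nfs-utils
# reads /etc/nfsmount.conf (see nfsmount.conf(5)):
#   [ NFSMount_Global_Options ]
#   Defaultvers=3
```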