Empty thin pool LV seems to be taken into account for capacity monitoring #210
Still the same: if the remaining capacity is not enough, Kubernetes gets stuck reporting insufficient space and cannot allocate the thin volume.
I tried to reproduce this but was unable to. Below are the details. The VG is this:
Created two PVCs bound to thin LVs on a thin pool. The thin pool is 512MiB.
The thin LVs got created.
Now deleted the claims, so the thin LVs are deleted.
The LVs got deleted.
Created a new claim using the same PVC YAML. The thin LV is created successfully.
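For reference, a rough equivalent of the steps above using plain lvm2 commands might look like the following. The VG name (lvmvg), device path (/dev/sdb), and LV sizes are illustrative assumptions, not taken from the report; the driver issues similar operations when the PVCs are created and deleted.

```sh
# Assumed repro of the maintainer's steps with plain lvm2 commands.
vgcreate lvmvg /dev/sdb                       # the VG backing the storage class (device assumed)
lvcreate -L 512M -T lvmvg/thinpool            # 512MiB thin pool
lvcreate -V 256M -T lvmvg/thinpool -n pvc-1   # thin LV backing the first PVC
lvcreate -V 256M -T lvmvg/thinpool -n pvc-2   # thin LV backing the second PVC
lvs lvmvg                                     # both thin LVs are listed
lvremove -y lvmvg/pvc-1 lvmvg/pvc-2           # claims deleted -> thin LVs removed
lvcreate -V 256M -T lvmvg/thinpool -n pvc-3   # re-creating a claim still succeeds
```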
Please update if this is still an issue, and if there are any other details that can help reproduce this locally.
For example: a 20GB VG that already has a 15GB thin pool, leaving 5GB free.
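To make that example concrete (VG name, device path, and sizes are illustrative assumptions): creating a thin pool allocates its extents from the VG immediately, so vg_free drops to roughly 5GB even while the pool holds no thin volumes. If the plugin's free-capacity check is driven by vg_free, an empty thin pool already appears as consumed space.

```sh
# Assumed setup matching the example above: a 20GB VG with an empty 15GB thin pool.
vgcreate lvmvg /dev/sdb                      # 20GB VG (device path assumed)
lvcreate -L 15G -T lvmvg/thinpool            # empty 15GB thin pool
vgs -o vg_size,vg_free lvmvg                 # vg_free is now ~5GB, although the pool is empty
lvs -o lv_name,lv_size,data_percent lvmvg    # data_percent of the pool is 0.00
```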
I don't think that's a problem here. Please refer to the details below.
Now try creating a
@graphenn Thank you. Two questions:
It'd be helpful if you could share the real outputs that show this behaviour.
It'll depend upon the settings. However, I'll check whether there is a delay or race where the plugin picks up the free-space details with a lag somewhere.
Summarising the issue reported:
Thanks for the issue and the findings posted above. Further analysis reveals that:
Pending further investigation in v4.3.
What steps did you take and what happened:
What did you expect to happen:
I expected the node to be reported as having 10GB of free space, since no "real" volumes exist, only the thin pool LV.
Otherwise, I can't deploy the same Pod again, even though the disk space is actually free.
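A sketch of the two numbers being conflated here, using standard lvm2 report fields (the VG and pool names are assumed): VG-level free space ignores capacity that is still usable inside the thin pool, while the pool's own usage figures show how much room remains for new thin volumes.

```sh
# VG-level view: vg_free excludes the extents already handed to the thin pool LV.
vgs -o vg_size,vg_free lvmvg

# Pool-level view: an empty pool reports data_percent of 0.00, i.e. its full
# size is still available for thin volumes despite vg_free being small.
lvs -o lv_name,lv_size,data_percent lvmvg/thinpool
```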
Environment: