feat(storage/linstor): Enhance multi-disk support and provisioning flexibility in Linstor SR tests #270
Conversation
Adding a new CLI parameter such as … — couldn't we have test definitions that do both?

We can do that. Since this is XOSTOR level, one full cycle (create-test-destroy) with … Can we retain the CLI option for those callers who want to selectively run in only one of the modes?

In theory, pytest already offers enough selectors to execute a set of tests and not others.
```python
'redundancy': '1',
'provisioning': 'thin'
'group-name': storage_pool_name,
'redundancy': '2',
```
Is there a specific reason to set 2 in your tests?
There is a strange issue with a thick SR and `redundancy` set to `1`. The error below occurs when trying to create a VM (VDI). So, keeping it at `2` for now, so that the other tests on a thick SR can pass.

```
***** LINSTOR resources on XCP-ng: EXCEPTION <class 'xs_errors.SROSError'>, VDI Creation failed [opterr=error cmd: `['/usr/bin/vhd-util', 'create', '--debug', '-n', '/dev/drbd/by-res/xcp-volume-d21c6a82-ed29-4106-ace6-7542c334254c/0', '-s', '2048', '-S', '2097152']`, code: `30`, reason: `/dev/drbd/by-res/xcp-volume-d21c6a82-ed29-4106-ace6-7542c334254c/0: failed to create: -30` (openers: {'xcp-ng-xs1': {}})]
Feb 18 19:02:59 xcp-ng-xs1 SM: [46577] File "/opt/xensource/sm/LinstorSR", line 1743, in create
```
I think we must handle this error in the driver itself. In theory a simple busy loop should fix that.
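The driver-side busy loop suggested here could be sketched as follows. This is illustrative only: `retry_on_busy` and its exit-code convention are assumptions, with `30` standing in for the EBUSY-like code from the log above.

```python
import time

def retry_on_busy(fn, attempts=10, delay=0.0, busy_code=30):
    """Call fn() until it returns 0 (success); retry while it returns
    busy_code (e.g. the DRBD device is still being opened).

    Returns the number of tries it took; raises on a non-busy error
    or when the device stays busy after all attempts.
    """
    for attempt in range(attempts):
        rc = fn()
        if rc == 0:
            return attempt + 1
        if rc != busy_code:
            raise RuntimeError(f'failed with code {rc}')
        time.sleep(delay)  # device busy: wait, then retry
    raise RuntimeError(f'still busy after {attempts} attempts')

# e.g. retry_on_busy(lambda: subprocess.call(vhd_util_cmd), delay=1.0)
```

In the real driver the callable would wrap the `vhd-util create` invocation; the sketch only captures the retry policy.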
Thanks, will keep it as `1`.
Will this PR involve changes in linstor job definitions in …?

After review of …, earlier we had …
```python
if "auto" not in disks:
    # Validate that all specified disks exist on the master host
    for disk in disks:
```
You use `disks` as a list (which it is), but it only gets multiple members when the option is passed multiple times. Meaning `--sr-disk=nvme1n1 --sr-disk=nvme2n1` results in `["nvme1n1", "nvme2n1"]`.
But a call like `--sr-disk=nvme1n1,nvme2n1` results in `["nvme1n1,nvme2n1"]`.
We expect the list on the command line to be given in the latter way, so you will need to split the members of `disks`.
You can look at the `hosts` fixture, which does the same thing with its input.
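The splitting described here could look like the sketch below. `expand_disk_args` is an illustrative name, not the repo's actual helper; the real `hosts` fixture presumably does something equivalent with its own input.

```python
def expand_disk_args(raw_disks):
    """Flatten --sr-disk values, splitting each on commas, so that both
    --sr-disk=nvme1n1 --sr-disk=nvme2n1 (two list members) and
    --sr-disk=nvme1n1,nvme2n1 (one comma-separated member)
    yield the same flat list of disk names."""
    disks = []
    for value in raw_disks:
        # split on commas and drop empty fragments / stray whitespace
        disks.extend(d.strip() for d in value.split(',') if d.strip())
    return disks
```

With this in place a fixture can accept either calling convention without caring which one the user picked.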
Yes, we can update the code to handle disks for the `--sr-disk=nvme1n1,nvme2n1` scenario.
However, `sr-disk` seems to be used in other places as well, and I did not want to break or alter that, for compatibility.
Also, if we are handling multiple disks in one option, then the option should become `--sr-disks` instead of `--sr-disk`, along the same lines as `hosts`. And this will cause a ripple effect in other places. The current code gracefully handles both cases, so can I put up a separate PR for this (`--sr-disks`), or do you prefer the change in the current one?
In `jobs.py`, `"--sr-disk": "auto"` is used, which lets the current code pick multiple available disks and stay intact.
It's mostly used with auto; it's just that if we are expected to give multiple disks to a test, using `--sr-disk` with a comma-separated value makes more sense.
If we use multiple values, we expect the test to be run multiple times, using the values one after the other, as is done with `--vm`.
For this case, we don't have to go further than having a comma-separated value working on `sr-disk` for your new fixture, and (maybe) ignoring the values after the first disk for the already existing fixtures using `sr-disk` that expect only one disk.
For renaming the parameter completely, I'm summoning @Wescoeur and @stormi to give their ideas on the matter.
What's the benefit of parsing a comma-separated list, when `pytest` already gives us a way to specify `--sr-disk=nvme1n1 --sr-disk=nvme2n1`?
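The repeated-option behavior referred to here comes from registering the option with `action="append"`, so each `--sr-disk` occurrence adds one list element. A minimal demonstration using `argparse` (which pytest's option parser mirrors for this purpose; only the option name is taken from this thread):

```python
import argparse

def parse(argv):
    # Fresh parser per call; action="append" collects one element per
    # occurrence of --sr-disk, the behavior the comment above relies on.
    p = argparse.ArgumentParser()
    p.add_argument("--sr-disk", action="append", default=[], dest="sr_disk")
    return p.parse_args(argv).sr_disk

multi = parse(["--sr-disk=nvme1n1", "--sr-disk=nvme2n1"])
single = parse(["--sr-disk=nvme1n1,nvme2n1"])
print(multi)   # ['nvme1n1', 'nvme2n1']
print(single)  # ['nvme1n1,nvme2n1'] -- one element, not split
```

This is exactly why comma-separated input needs an explicit split step: the parser treats `nvme1n1,nvme2n1` as a single opaque value.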
…exibility in Linstor SR tests

- Added `sr_disks_for_all_hosts` fixture to support multiple disks, ensuring availability across all hosts and handling "auto" selection dynamically.
- Updated `lvm_disks` (`lvm_disk`) fixture to provision multiple disks collectively, refining vgcreate and pvcreate logic.
- Introduced `provisioning_type` and `storage_pool_name` fixtures to dynamically configure storage provisioning (thin or thick).
- Refactored Linstor SR test cases to use the new fixtures, improving test coverage across provisioning types.
- Optimized Linstor installation and cleanup using concurrent execution, reducing setup time.
- Enhanced validation and logging for disk selection.

Signed-off-by: Rushikesh Jadhav <[email protected]>
The `lvm_disks` part looks like a good candidate to move into a separate first commit.

- `sr_disks_for_all_hosts` fixture to support multiple disks, ensuring availability across all hosts and handling "auto" selection dynamically.
- `lvm_disk` fixture to provision multiple disks collectively, refining vgcreate and pvcreate logic.
- `provisioning_type` and `storage_pool_name` fixtures to dynamically configure storage provisioning (thin or thick).