
Multi_vms_with_stress: add a test about starting VMs with stress workload on host #5934

Open · wants to merge 1 commit into base: master

Conversation

rh-jugraham (Contributor) commented Oct 8, 2024

Case ID: VIRT-301893

Automates the case that verifies multiple VMs, each sized relative to # host_online_cpu, can start while a stress workload is running on the host.

Test steps:
1. Prepare 3 VMs, each with an even vcpu count of about 2/3 of # host_online_cpu
2. Start a stress workload on the host
3. Start all VMs and verify each can be logged into normally
4. Verify all VMs can be shut down gracefully
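The vcpu sizing in step 1 can be sketched as a small helper; `host_online_cpus` here is a hypothetical stand-in for however the test queries the host, and the rounding matches the parity check discussed in the review thread:

```python
def vcpus_for_guest(host_online_cpus):
    """Return an even vcpu count at roughly 2/3 of the host's online CPUs.

    If 2/3 of the host CPU count comes out odd, bump it by one so each
    guest always gets an even number of vcpus.
    """
    vcpus_num = host_online_cpus * 2 // 3
    if vcpus_num % 2 != 0:
        vcpus_num += 1
    return vcpus_num
```

For example, a host with 12 online CPUs yields 8 vcpus per guest, while a host with 8 online CPUs yields 5, rounded up to 6.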

Evidence of tests passing:

(.libvirt-ci-venv-ci-runtest-TSPbXe) [root@ampere-mtsnow-altramax-31 ~]# avocado run --vt-type libvirt multi_vms_with_stress
No python imaging library installed. Screendump and Windows guest BSOD detection are disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
No python imaging library installed. PPM image conversion to JPEG disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
JOB ID     : f8b4b60d59e7cf3d867104d46833fe6c8eb9e868
JOB LOG    : /var/log/avocado/job-results/job-2024-10-24T10.41-f8b4b60/job.log
 (1/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: STARTED
 (1/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: PASS (135.31 s)
 (2/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: STARTED
 (2/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: PASS (135.70 s)
 (3/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: STARTED
 (3/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: PASS (135.55 s)
 (4/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: STARTED
 (4/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: PASS (138.33 s)
 (5/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: STARTED
 (5/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: PASS (138.49 s)
 (6/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: STARTED
 (6/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: PASS (137.15 s)
 (7/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: STARTED
 (7/7) type_specific.io-github-autotest-libvirt.multi_vms_with_stress: PASS (138.84 s)
RESULTS    : PASS 7 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /var/log/avocado/job-results/job-2024-10-24T10.41-f8b4b60/results.html
JOB TIME   : 976.49 s

@rh-jugraham rh-jugraham changed the title Multi vms with stress Multi_vms_with_stress: ensure vms can start with stress workload running on host Oct 24, 2024
@rh-jugraham rh-jugraham marked this pull request as ready for review October 24, 2024 15:09
rh-jugraham (Contributor, Author)

@Yingshun Could you review this PR? Thanks!

Yingshun (Contributor)

@nanli1 Could you please review this PR? Thanks!


Comment on lines 37 to 38
vmxml.memory = int(memory)
vmxml.current_mem = int(memory)
nanli1 (Contributor):

Could I ask the reason for increasing the memory?

rh-jugraham (Contributor, Author) Oct 30, 2024:

Without the increase, the test tends to fail, with the serial logs indicating "Out of memory: Killed process ........". Increasing the memory (in this case to roughly double the default, around 4 GB) ensures that the test passes consistently.
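The memory bump described above amounts to simple arithmetic; a minimal sketch, assuming a default of about 2 GiB and noting that the names and default value here are illustrative, not taken from the test's cfg:

```python
KIB_PER_GIB = 1024 * 1024

def bumped_memory_kib(default_kib=2 * KIB_PER_GIB, factor=2):
    """Roughly double the guest memory (in KiB) so guests under host
    stress are not reaped by the OOM killer."""
    return default_kib * factor
```

With these assumed values, the helper returns 4 GiB in KiB, matching the "around double the default to around 4GB" figure mentioned in the reply.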

Comment on lines 51 to 52
if (vcpus_num % 2 != 0):
vcpus_num += 1
nanli1 (Contributor):
Could you please explain why vcpus_num is incremented by 1 when vcpus_num % 2 != 0? :D

rh-jugraham (Contributor, Author) Oct 30, 2024:

The automation case says "Prepare 3 vms and each vm has even vcpus number which is about 2/3 of # host_online_cpu", hence adding 1 when the computed number is odd, to guarantee an even vcpu count. The case also comes from a feature sync with qemu-kvm (multi_vms_with_stress), which is where I copied this particular if statement from.

nanli1 (Contributor) commented Oct 30, 2024

For the commit message, could you please give more detail? For example, see 855e516.

@rh-jugraham rh-jugraham changed the title Multi_vms_with_stress: ensure vms can start with stress workload running on host Multi_vms_with_stress: Add a test about starting VMs with stress workload on host Oct 30, 2024
@rh-jugraham rh-jugraham changed the title Multi_vms_with_stress: Add a test about starting VMs with stress workload on host Multi_vms_with_stress: add a test about starting VMs with stress workload on host Oct 30, 2024
Multi_vms_with_stress: add a test about starting VMs with stress workload on host

This PR adds:
	VIRT-301893 - [aarch64 only] Start VMs with maximum vcpus and stress on host

Signed-off-by: Julia Graham <[email protected]>