Incorrect NUMA Node and CPU Pinning During VM Migration #6772
feldsam added a commit to FELDSAM-INC/one that referenced this issue on Nov 4, 2024 — …VM Save and Live Migration (Signed-off-by: Kristian Feldsam <[email protected]>)
feldsam added a commit to FELDSAM-INC/one that referenced this issue on Nov 4, 2024 — Signed-off-by: Kristian Feldsam <[email protected]>
feldsam added a commit to FELDSAM-INC/one that referenced this issue on Nov 4, 2024 — Signed-off-by: Kristian Feldsam <[email protected]>
feldsam added a commit to FELDSAM-INC/one that referenced this issue on Dec 20, 2024 — Signed-off-by: Kristian Feldsam <[email protected]>
feldsam added a commit to FELDSAM-INC/one that referenced this issue on Dec 20, 2024 — Signed-off-by: Kristian Feldsam <[email protected]>
feldsam added a commit to FELDSAM-INC/one that referenced this issue on Dec 20, 2024 — Signed-off-by: Kristian Feldsam <[email protected]>
feldsam added a commit to FELDSAM-INC/one that referenced this issue on Dec 20, 2024 — Signed-off-by: Kristian Feldsam <[email protected]>
Hello @rsmontero @paczerny, I just fixed my code for deleting capacity from the previous host and tested it in a lab environment. I tested all migration modes.
This is related to #6596. I also tested bigger VMs, which span multiple NUMA nodes, and I see another bug: only the first NUMA node's CPUs are cleared. Should I report this as a new issue? Thanks!
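The multi-node bug described above can be illustrated with a minimal sketch. The types and function names below are hypothetical, not OpenNebula's actual classes; the point is the difference between freeing pinned CPUs only on the first NUMA node of a VM's topology versus iterating every node the VM spans:

```cpp
#include <map>
#include <set>
#include <vector>

// Hypothetical per-NUMA-node pinning state (illustrative only).
struct NumaNode {
    std::set<int> pinned; // host CPU IDs currently reserved by VMs
};

// Buggy variant: releases CPUs only for the first node in the
// VM's topology map, leaking reservations on the remaining nodes.
void free_pinning_first_only(std::vector<NumaNode>& nodes,
                             const std::map<int, std::set<int>>& vm_topology)
{
    auto it = vm_topology.begin();
    if (it == vm_topology.end()) return;

    for (int cpu : it->second) nodes[it->first].pinned.erase(cpu);
}

// Fixed variant: releases CPUs on every NUMA node the VM spans.
void free_pinning_all(std::vector<NumaNode>& nodes,
                      const std::map<int, std::set<int>>& vm_topology)
{
    for (const auto& [node_id, cpus] : vm_topology) {
        for (int cpu : cpus) nodes[node_id].pinned.erase(cpu);
    }
}
```

With a two-node VM, the buggy variant leaves the second node's CPUs marked as pinned after migration, which matches the symptom reported here.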
feldsam
added a commit
to FELDSAM-INC/one
that referenced
this issue
Dec 20, 2024
…VM Save and Live Migration Signed-off-by: Kristian Feldsam <[email protected]>
/!\ To report a security issue please follow this procedure:
[https://github.com/OpenNebula/one/wiki/Vulnerability-Management-Process]
Description
The current implementation of huge pages support, introduced by the enhancement "Support use of huge pages without CPU pinning" (#6185), selects a NUMA node based on free resources. This scheduling mechanism effectively balances load across NUMA nodes. However, issues arise during VM migration, leading to incorrect NUMA node selection and CPU pinning on the destination host.
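The free-resource-based selection described above can be sketched as follows. This is an illustrative model, not OpenNebula's actual scheduler code; `NodeCapacity` and `pick_numa_node` are assumed names, and "free resources" is simplified here to a free-huge-pages count:

```cpp
#include <algorithm>
#include <vector>

// Illustrative per-node capacity record (hypothetical, simplified).
struct NodeCapacity {
    int  id;             // NUMA node ID
    long free_hugepages; // free huge pages on this node
};

// Pick the NUMA node with the most free huge pages; this balances
// load at deployment time, but nothing in this selection accounts
// for the node the VM occupied before a migration.
int pick_numa_node(const std::vector<NodeCapacity>& nodes)
{
    auto best = std::max_element(
        nodes.begin(), nodes.end(),
        [](const NodeCapacity& a, const NodeCapacity& b) {
            return a.free_hugepages < b.free_hugepages;
        });

    return best == nodes.end() ? -1 : best->id;
}
```

Because the choice depends only on the current free counts, a migrated VM can be assigned a different node than the one recorded for it, which is consistent with the inconsistencies described in this issue.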
To Reproduce
Expected behavior
Details
Additional context
Progress Status