update volume status after every reconcile loop #144
Conversation
Force-pushed from 55b37bc to b712690
PR was updated and rebased.
```go
if len(volumes) != 0 {
	requireUpdate = true
	machine.Status.VolumeStatus = volumes
}

if len(nics) != 0 {
	requireUpdate = true
	machine.Status.NetworkInterfaceStatus = nics
}
```
I do not get the point of this code. How is the length of `Volumes` and `NICs` related to updates?
Yes, you are right; a comparison against nil will be sufficient.
fixed
```go
currentStatus := machine.Status.GetVolumesAsMap()

attachedVolumes, err := attacher.ListVolumesAsMap()
if err != nil {
	// a missing list of attached volumes can affect deletion of volumes, so we cannot continue
	return machine.Status.VolumeStatus, err
}
```
I think it's bad practice to return data in an error case.
Here we can return nil; it isn't a problem for the new update-status functionality.
fixed
```go
return nil, fmt.Errorf("error iterating mounted volumes: %w", err)

syncStatusWithAttachedVolumes(currentStatus, attachedVolumes)

errs := r.deleteDetachedVolumes(ctx, log, currentStatus, mounter)
```
What is happening here? Why do we just continue in case of errors?
The functions are designed to be failure tolerant. There is a lot of logic in one place where one operation can affect another (detach, delete, resize, attach), so one problematic disk must not affect the others, and disks are reconciled based on their current status.
This function was reverted to the previous implementation, which was failure tolerant too.
I would suggest subscribing to the `LifecycleEvents` of the libvirt Domain and enqueueing Machines if something changes in the domain.
Here is some sample code showing how this might look:
```go
// Subscribe to lifecycle events using the provided context
eventChan, err := l.LifecycleEvents(ctx)
if err != nil {
	fmt.Println("Failed to subscribe to lifecycle events:", err)
	return
}
fmt.Println("Subscribed to lifecycle events. Listening for events...")

// Listen for lifecycle events in a separate goroutine
go func() {
	for event := range eventChan {
		fmt.Printf("Received lifecycle event: %+v\n", event)
		// enqueue Machines here and switch over the events
	}
	fmt.Println("Lifecycle event channel closed.")
}()

// Wait for the context to be cancelled (e.g., by receiving a signal)
<-ctx.Done()
fmt.Println("Context cancelled. Shutting down...")
```
This can also be used to consolidate other goroutines that we start during the initialization of the provider.
@afritzler we discussed implementing listening to libvirt events, and we want to do that, but not now.
Force-pushed from 850ff8f to 9fe64cd
- extend api.VolumeStatus
- rewrite delete/detach logic
- rewrite update of status
Force-pushed from 9fe64cd to 3fe8959
…-unplugged-from-domain
```diff
 if err := attacher.ForEachVolume(func(volume *AttachVolume) bool {
 	currentVolumeNames.Insert(volume.Name)
 	return true
 }); err != nil {
-	return nil, fmt.Errorf("error iteratin attached volumes: %w", err)
+	return nil, fmt.Errorf("error iterating attached volumes: %w", err)
 }
```
typo
The problems solved in this PR will be solved later with events and different logic.
Proposed Changes
Rewrite the detach/attach logic to use the status as the source of truth.
Fixes #114
Fixes #145