Minimise Baremetal footprint, live-iso bootstrap #361
I see two separate improvements here: (1) running the installer/bootstrap services from an RHCOS live ISO instead of a dedicated bootstrap VM, and (2) pivoting the bootstrap host into the cluster so a compact 3-node deployment needs no additional machine.
Maybe it's fair to say that (2) is the more interesting new feature, and it requires (1).
However, I think we could include a user story for (1) on its own:
i.e. booting an RHCOS live ISO would be easier than installing RHEL on the machine? Not needing a bootstrap VM means the machine you boot with this ISO could be a VM?
Related, if you're not installing a "compact 3-node cluster", wouldn't it be better to avoid the bootstrap-pivot-to-master and instead do bootstrap-pivot-to-worker once the install has fully completed?
Agree, good points. In the past @crawford has said that pivot-to-worker may have some security risk - Alex can you please comment on that?
So the issue according to @crawford is that this worker node might still get traffic meant for the API VIP after the pivot, for example due to a load balancer configuration not being updated. Then some malicious actor could run an MCS on that node and load other software onto additional nodes being deployed. I don't think that this would be possible because pods would be on a different network, but I'm not sure.
Yep, that sums it up. If we can pivot to a control plane node or a compute node (and we've proven we can), it's easier for us and the customer to just avoid the potential issues and only pivot to the control plane node.
That also implies to me that we don't see any downsides to pivoting the bootstrap machine to be a control plane node - like difficulty debugging/recovering if the pivot fails. Is that the case?
(i.e. if there are downsides to this bootstrap-to-master pivot, then that would be a reason to choose to pivot to a worker, in cases where we have a choice)
In our discussions about running the assisted service and UI in standalone/disconnected cases, we kept coming back to the negative implications of bootstrap-to-master as a reason not to run that stuff on the bootstrap node. That's what got me thinking about whether bootstrap-to-master is something we only want to use when it's our only choice, or something we want to use in all cases.
The main benefit of bootstrap-to-master is that it covers both use cases (with and without workers). If we enable bootstrap-to-worker, are we potentially doubling the testing matrix?
I didn't arrive at that conclusion at all while reading this enhancement, but it's certainly possible. If we're intending for the host booted from the live ISO to run the installer binary and start the bootstrapping services, can we detail more of that aspect somewhere? I guess perhaps that came from ambiguity around "installer/bootstrap services" in line 75?
I added a user story to capture the "improve IPI day-1 experience" and will work on adding some more detail around running the bootstrap services on the live ISO (which is clearly a key part of the implementation here, and has already been prototyped) - @avishayt can you perhaps help with some more details?
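A rough sketch of what inspecting those bootstrap services could look like, assuming the live-ISO host runs the same units as today's bootstrap VM (e.g. bootkube.service); the host name is a placeholder and the exact unit set on the live ISO is part of the detail still to be written up:

```sh
# Assumption: the host booted from the live ISO runs the familiar bootstrap
# units (bootkube.service and friends), just like the current bootstrap VM.
# "live-iso-host" is a placeholder, not a real hostname.
ssh core@live-iso-host 'sudo systemctl status bootkube.service'

# Follow bootstrap progress until the control plane takes over:
ssh core@live-iso-host 'sudo journalctl -b -f -u bootkube.service'
```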
I believe Alex mentioned that it isn't good practice to run the bootstrap logic on a host that will become a worker, because the API VIP shouldn't ever be on a host that will later serve user workloads. If that's indeed the case, an extra host is needed for deployments with workers as well.
I guess that means we'd always reboot the bootstrap host to be the 3rd master, and never to become a worker? It doesn't necessarily follow that an extra host is required though?
I mean that we limited this proposal to 3-node clusters, but it's beneficial for any size cluster. Today if you want 3 masters and 2 workers you need 6 hosts (the 6th being the temporary bootstrap host); with this proposal you need only 5, because the bootstrap host pivots into the cluster.
Ah I see, thanks - I'll clarify that.
Would be interesting to hear from @crawford re the API VIP best practice though, as AFAIK that would work OK with our current keepalived solution (the VIP would just move over to one of the masters as soon as the API comes up there)
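For reference, a minimal sketch of the kind of tracked-VIP keepalived setup that produces that behaviour; the interface name, VIP address, priority and health check below are illustrative placeholders, not the actual rendered configuration:

```sh
# Illustrative only -- not the real rendered keepalived config. The idea: each
# node tracks its own local API health, and the VIP is claimed by a node whose
# check passes, so it migrates to a master once the API comes up there.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_script chk_api {
    script "/usr/bin/curl -o /dev/null -kfs https://localhost:6443/readyz"
    interval 3
    weight 20               # boost priority while the local API answers
}
vrrp_instance API_VIP {
    state BACKUP
    interface ens3          # placeholder NIC
    virtual_router_id 51
    priority 90
    virtual_ipaddress {
        192.168.111.5/24    # placeholder API VIP
    }
    track_script {
        chk_api
    }
}
EOF
```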
`cluster-etcd-operator` does not currently support installs of fewer than 3 master nodes. We would like to add support for this use case explicitly, but we struggle to understand what the clear signal from the installer defining this "install type" would be. As we continue to add support for these nonstandard install types, control-plane (and possibly other) components will likely need to tolerate some changes to their logic. The intention of the installer should be clear to the operator. I am wondering how we can best handle this problem? cc @crawford @sdodson
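As a point of reference, the nearest thing to such a signal today is probably the replica counts the installer records in the install-config; whether reading those is explicit enough for cluster-etcd-operator is exactly the open question. A sketch, assuming the usual cluster-config-v1 ConfigMap is present:

```sh
# Sketch: pull the install-config the installer leaves in the cluster and look
# at the control-plane / compute replica counts -- an implicit signal at best.
oc -n kube-system get configmap cluster-config-v1 \
  -o jsonpath='{.data.install-config}' | grep -E 'replicas:'
```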
cc @ironcladlou @retroflexer
@hexfusion Since the cluster will only be in 2-master mode until the bootstrap host pivots, can this be considered the same way as a master-replacement? I guess in an HA configuration if a master fails, there's no signal to the operator (other than a master going away), so we'd expect etcd things to just work but in a degraded state, until the 3rd master comes up?
@hexfusion , @crawford , @sdodson , @ironcladlou , @retroflexer : Is this item about the etcd operator the only open question about this design enhancement? Are we ready to approve it?
What is the impact of this change on the current integration with ACM? That expects to use Hive to run the installer in a pod in the ACM cluster, but it seems that will need to change to run something that attaches the live ISO to one of the hosts instead? I don't think we need to work out all of the details here, but we should at least point out that if we make this change in a way that isn't backwards compatible then we will break the existing ACM integration.
@dhellmann that is a good point - I suspect that means that, at least in the short/medium term, the existing bootstrap VM solution would still be required, and we have the same question to answer re any ACM integration with the assisted-install flow?
I've been wondering whether this proposal could be simplified by not considering the "run installer on the live-iso" part, and instead preparing a cluster-specific ISO the user can then boot, e.g. `openshift-install create bootstrap-iso`? That would still imply two possible install paths though: the bootstrap VM case, or the alternative based on creation of a bootstrap ISO without dependencies on libvirt.
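To make that alternative concrete, a sketch of the flow such a subcommand might enable; the create bootstrap-iso target and the output file name are hypothetical, not existing installer behaviour:

```sh
# Hypothetical flow -- "create bootstrap-iso" and the output path are not real
# installer targets today, just an illustration of the proposed alternative.
openshift-install create bootstrap-iso --dir=mycluster

# Attach mycluster/bootstrap.iso to any machine (VM or bare metal), e.g. via
# BMC virtual media, boot it, and let the bootstrap services run from there.
```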
I don't know if anyone has started thinking deeply about integrating the assisted installer with ACM, or what that means. Perhaps the integration is just running the assisted installer on the ACM cluster and linking to it from the ACM GUI? Perhaps it doesn't make sense to integrate the assisted installer with ACM at all, since the assisted installer doesn't automate things like the IPI installer does and the point of ACM is to have that automation?
ACM assumes the installer manages the entire process. If we change the installer to generate an ISO to replace the bootstrap VM, then we would have to do something somewhere to attach that ISO to the host and boot it. I think to accomplish that, we would end up moving a lot of the features of the IPI installer into some new controller in ACM, and in the process we might also end up with a different path for ACM's integration with the installer because Hive wouldn't know how to drive the tool to mount the ISO.
So as far as I can tell, we're going to have 2 paths somewhere, regardless of what we do.