Workers are not getting deployed as part of the installation process #52

Open
gfysaris opened this issue Feb 14, 2022 · 7 comments

@gfysaris

Hello,
We are trying to deploy the required AWS resources for a UPI installation of the CP4I solution in our environment.
Using the Terraform code provided by this repo, we manage to execute a successful deployment (as far as Terraform is concerned): we get a working bootstrap node and 3 master nodes, and the 3 load balancers are deployed as well.

But we do not see the 3 workers deployed.
Could you please help us understand the issue with our deployment?

Thank you

@Praveenmail2him

Can you please post your log files?

@gfysaris

Of course, we're more than happy to provide any information needed.
Since there are multiple logs, could you please let me know which one would be most relevant for you?

@Praveenmail2him

.openshift_install.log would help me figure it out.

@gfysaris

terraform.tfvars

cluster_name          = "ocp4"
openshift_pull_secret = "./openshift_pull_secret.json"
openshift_version     = "4.6.28"

aws_extra_tags = {
  "owner" = "admin"
}

aws_region = "eu-west-1"
aws_publish_strategy = "External"
base_domain = "001.external.ocp.xxx.demos.aws.xxx.xxx"
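
For context, in a UPI install the worker instances are not created by openshift-install itself; they are provisioned by the Terraform templates, so the worker count has to be supplied (or defaulted) on the Terraform side. A purely hypothetical sketch of what such a setting could look like in terraform.tfvars; the variable names here are illustrative, the real ones are whatever this repo's variables.tf defines:

# Hypothetical example only; check variables.tf in this repo for the actual variable names.
worker_count         = 3             # a count of 0 (or an unset default) would explain missing workers
worker_instance_type = "m5.xlarge"   # illustrative instance type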

.openshift_install.log

time="2022-02-13T14:46:15Z" level=debug msg="OpenShift Installer 4.6.28"
time="2022-02-13T14:46:15Z" level=debug msg="Built from commit c47fb1296122a601bc578b9251ba1fb3c7dd4fd1"
time="2022-02-13T14:46:15Z" level=debug msg="Fetching Master Machines..."
time="2022-02-13T14:46:15Z" level=debug msg="Loading Master Machines..."
time="2022-02-13T14:46:15Z" level=debug msg="  Loading Cluster ID..."
time="2022-02-13T14:46:15Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:15Z" level=debug msg="      Loading SSH Key..."
time="2022-02-13T14:46:15Z" level=debug msg="      Loading Base Domain..."
time="2022-02-13T14:46:15Z" level=debug msg="        Loading Platform..."
time="2022-02-13T14:46:15Z" level=debug msg="      Loading Cluster Name..."
time="2022-02-13T14:46:15Z" level=debug msg="        Loading Base Domain..."
time="2022-02-13T14:46:15Z" level=debug msg="        Loading Platform..."
time="2022-02-13T14:46:15Z" level=debug msg="      Loading Pull Secret..."
time="2022-02-13T14:46:15Z" level=debug msg="      Loading Platform..."
time="2022-02-13T14:46:15Z" level=info msg="Credentials loaded from default AWS environment variables"
time="2022-02-13T14:46:16Z" level=debug msg="    Using Install Config loaded from target directory"
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Platform Credentials Check..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Image..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Master Ignition Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Root CA..."
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="  Generating Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Platform Credentials Check..."
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="  Generating Platform Credentials Check..."
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Image..."
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="  Generating Image..."
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Master Ignition Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Root CA..."
time="2022-02-13T14:46:16Z" level=debug msg="    Generating Root CA..."
time="2022-02-13T14:46:16Z" level=debug msg="  Generating Master Ignition Config..."
time="2022-02-13T14:46:16Z" level=debug msg="Generating Master Machines..."
time="2022-02-13T14:46:16Z" level=info msg="Consuming Install Config from target directory"
time="2022-02-13T14:46:16Z" level=debug msg="Purging asset \"Install Config\" from disk"
time="2022-02-13T14:46:16Z" level=debug msg="Fetching Worker Machines..."
time="2022-02-13T14:46:16Z" level=debug msg="Loading Worker Machines..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Platform Credentials Check..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Image..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Worker Ignition Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Root CA..."
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="  Reusing previously-fetched Cluster ID"
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Platform Credentials Check..."
time="2022-02-13T14:46:16Z" level=debug msg="  Reusing previously-fetched Platform Credentials Check"
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Image..."
time="2022-02-13T14:46:16Z" level=debug msg="  Reusing previously-fetched Image"
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Worker Ignition Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Root CA..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Root CA"
time="2022-02-13T14:46:16Z" level=debug msg="  Generating Worker Ignition Config..."
time="2022-02-13T14:46:16Z" level=debug msg="Generating Worker Machines..."
time="2022-02-13T14:46:16Z" level=debug msg="Fetching Common Manifests..."
time="2022-02-13T14:46:16Z" level=debug msg="Loading Common Manifests..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Ingress Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading DNS Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Platform Credentials Check..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Infrastructure Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Cloud Provider Config..."
time="2022-02-13T14:46:16Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="      Loading Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="      Loading Platform Credentials Check..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Additional Trust Bundle Config..."
time="2022-02-13T14:46:16Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Network Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Network CRDs..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Proxy Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Network Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Scheduler Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Image Content Source Policy..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Root CA..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Certificate (etcd-signer)..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Certificate (etcd-ca-bundle)..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Certificate (etcd-signer)..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Certificate (etcd-client)..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Certificate (etcd-signer)..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Certificate (etcd-metric-ca-bundle)..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Certificate (etcd-metric-signer-client)..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading Certificate (mcs)..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Root CA..."
time="2022-02-13T14:46:16Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading CVOOverrides..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdCAConfigMap..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdClientSecret..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdMetricClientSecret..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdMetricServingCAConfigMap..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdMetricSignerSecret..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdNamespace..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdService..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdSignerSecret..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading KubeCloudConfig..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading EtcdServingCAConfigMap..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading KubeSystemConfigmapRootCA..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading MachineConfigServerTLSSecret..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading OpenshiftConfigSecretPullSecret..."
time="2022-02-13T14:46:16Z" level=debug msg="  Loading OpenshiftMachineConfigOperator..."
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="  Reusing previously-fetched Cluster ID"
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching Ingress Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="  Generating Ingress Config..."
time="2022-02-13T14:46:16Z" level=debug msg="  Fetching DNS Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Cluster ID..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Cluster ID"
time="2022-02-13T14:46:16Z" level=debug msg="    Fetching Platform Credentials Check..."
time="2022-02-13T14:46:16Z" level=debug msg="    Reusing previously-fetched Platform Credentials Check"
time="2022-02-13T14:46:16Z" level=debug msg="  Generating DNS Config..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Infrastructure Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Cluster ID..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Cluster ID"
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Cloud Provider Config..."
time="2022-02-13T14:46:17Z" level=debug msg="      Fetching Install Config..."
time="2022-02-13T14:46:17Z" level=debug msg="      Reusing previously-fetched Install Config"
time="2022-02-13T14:46:17Z" level=debug msg="      Fetching Cluster ID..."
time="2022-02-13T14:46:17Z" level=debug msg="      Reusing previously-fetched Cluster ID"
time="2022-02-13T14:46:17Z" level=debug msg="      Fetching Platform Credentials Check..."
time="2022-02-13T14:46:17Z" level=debug msg="      Reusing previously-fetched Platform Credentials Check"
time="2022-02-13T14:46:17Z" level=debug msg="    Generating Cloud Provider Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Additional Trust Bundle Config..."
time="2022-02-13T14:46:17Z" level=debug msg="      Fetching Install Config..."
time="2022-02-13T14:46:17Z" level=debug msg="      Reusing previously-fetched Install Config"
time="2022-02-13T14:46:17Z" level=debug msg="    Generating Additional Trust Bundle Config..."
time="2022-02-13T14:46:17Z" level=debug msg="  Generating Infrastructure Config..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Network Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Network CRDs..."
time="2022-02-13T14:46:17Z" level=debug msg="    Generating Network CRDs..."
time="2022-02-13T14:46:17Z" level=debug msg="  Generating Network Config..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Proxy Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Network Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Network Config"
time="2022-02-13T14:46:17Z" level=debug msg="  Generating Proxy Config..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Scheduler Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:17Z" level=debug msg="  Generating Scheduler Config..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Image Content Source Policy..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:17Z" level=debug msg="  Generating Image Content Source Policy..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Root CA..."
time="2022-02-13T14:46:17Z" level=debug msg="  Reusing previously-fetched Root CA"
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Certificate (etcd-signer)..."
time="2022-02-13T14:46:17Z" level=debug msg="  Generating Certificate (etcd-signer)..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Certificate (etcd-ca-bundle)..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Certificate (etcd-signer)..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Certificate (etcd-signer)"
time="2022-02-13T14:46:17Z" level=debug msg="  Generating Certificate (etcd-ca-bundle)..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Certificate (etcd-client)..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Certificate (etcd-signer)..."
time="2022-02-13T14:46:17Z" level=debug msg="    Reusing previously-fetched Certificate (etcd-signer)"
time="2022-02-13T14:46:17Z" level=debug msg="  Generating Certificate (etcd-client)..."
time="2022-02-13T14:46:17Z" level=debug msg="  Fetching Certificate (etcd-metric-ca-bundle)..."
time="2022-02-13T14:46:17Z" level=debug msg="    Fetching Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:17Z" level=debug msg="    Generating Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating Certificate (etcd-metric-ca-bundle)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Reusing previously-fetched Certificate (etcd-metric-signer)"
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Certificate (etcd-metric-signer-client)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Fetching Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Reusing previously-fetched Certificate (etcd-metric-signer)"
time="2022-02-13T14:46:18Z" level=debug msg="  Generating Certificate (etcd-metric-signer-client)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Certificate (mcs)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Fetching Root CA..."
time="2022-02-13T14:46:18Z" level=debug msg="    Reusing previously-fetched Root CA"
time="2022-02-13T14:46:18Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:18Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:18Z" level=debug msg="  Generating Certificate (mcs)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching CVOOverrides..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating CVOOverrides..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdCAConfigMap..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdCAConfigMap..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdClientSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdClientSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdMetricClientSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdMetricClientSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdMetricServingCAConfigMap..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdMetricServingCAConfigMap..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdMetricSignerSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdMetricSignerSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdNamespace..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdNamespace..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdService..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdService..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdSignerSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdSignerSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching KubeCloudConfig..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating KubeCloudConfig..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching EtcdServingCAConfigMap..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating EtcdServingCAConfigMap..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching KubeSystemConfigmapRootCA..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating KubeSystemConfigmapRootCA..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching MachineConfigServerTLSSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating MachineConfigServerTLSSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching OpenshiftConfigSecretPullSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating OpenshiftConfigSecretPullSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching OpenshiftMachineConfigOperator..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating OpenshiftMachineConfigOperator..."
time="2022-02-13T14:46:18Z" level=debug msg="Generating Common Manifests..."
time="2022-02-13T14:46:18Z" level=debug msg="Fetching Openshift Manifests..."
time="2022-02-13T14:46:18Z" level=debug msg="Loading Openshift Manifests..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Cluster ID..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Kubeadmin Password..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading OpenShift Install (Manifests)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading CloudCredsSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading KubeadminPasswordSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading RoleCloudCredsSecretReader..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Private Cluster Outbound Service..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Baremetal Config CR..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Image..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Install Config..."
time="2022-02-13T14:46:18Z" level=debug msg="  Reusing previously-fetched Install Config"
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Cluster ID..."
time="2022-02-13T14:46:18Z" level=debug msg="  Reusing previously-fetched Cluster ID"
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Kubeadmin Password..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating Kubeadmin Password..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching OpenShift Install (Manifests)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating OpenShift Install (Manifests)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching CloudCredsSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating CloudCredsSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching KubeadminPasswordSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating KubeadminPasswordSecret..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching RoleCloudCredsSecretReader..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating RoleCloudCredsSecretReader..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Private Cluster Outbound Service..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating Private Cluster Outbound Service..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Baremetal Config CR..."
time="2022-02-13T14:46:18Z" level=debug msg="  Generating Baremetal Config CR..."
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Image..."
time="2022-02-13T14:46:18Z" level=debug msg="  Reusing previously-fetched Image"
time="2022-02-13T14:46:18Z" level=debug msg="Generating Openshift Manifests..."
time="2022-02-13T14:46:18Z" level=info msg="Manifests created in: installer-files/temp/manifests and installer-files/temp/openshift"
time="2022-02-13T14:46:18Z" level=debug msg="OpenShift Installer 4.6.28"
time="2022-02-13T14:46:18Z" level=debug msg="Built from commit c47fb1296122a601bc578b9251ba1fb3c7dd4fd1"
time="2022-02-13T14:46:18Z" level=debug msg="Fetching Kubeconfig Admin Client..."
time="2022-02-13T14:46:18Z" level=debug msg="Loading Kubeconfig Admin Client..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Certificate (admin-kubeconfig-client)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading Certificate (admin-kubeconfig-signer)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading Certificate (kube-apiserver-localhost-ca-bundle)..."
time="2022-02-13T14:46:18Z" level=debug msg="      Loading Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading Certificate (kube-apiserver-service-network-ca-bundle)..."
time="2022-02-13T14:46:18Z" level=debug msg="      Loading Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading Certificate (kube-apiserver-lb-ca-bundle)..."
time="2022-02-13T14:46:18Z" level=debug msg="      Loading Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:18Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading SSH Key..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading Base Domain..."
time="2022-02-13T14:46:18Z" level=debug msg="      Loading Platform..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading Cluster Name..."
time="2022-02-13T14:46:18Z" level=debug msg="      Loading Base Domain..."
time="2022-02-13T14:46:18Z" level=debug msg="      Loading Platform..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading Pull Secret..."
time="2022-02-13T14:46:18Z" level=debug msg="    Loading Platform..."
time="2022-02-13T14:46:18Z" level=debug msg="  Using Install Config loaded from state file"
time="2022-02-13T14:46:18Z" level=debug msg="  Fetching Certificate (admin-kubeconfig-client)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Fetching Certificate (admin-kubeconfig-signer)..."
time="2022-02-13T14:46:18Z" level=debug msg="    Generating Certificate (admin-kubeconfig-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Generating Certificate (admin-kubeconfig-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Fetching Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Fetching Certificate (kube-apiserver-localhost-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Fetching Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Generating Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Generating Certificate (kube-apiserver-localhost-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Fetching Certificate (kube-apiserver-service-network-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Fetching Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Generating Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Generating Certificate (kube-apiserver-service-network-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Fetching Certificate (kube-apiserver-lb-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Fetching Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Generating Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Generating Certificate (kube-apiserver-lb-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Generating Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Fetching Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Reusing previously-fetched Install Config"
time="2022-02-13T14:46:19Z" level=debug msg="Generating Kubeconfig Admin Client..."
time="2022-02-13T14:46:19Z" level=debug msg="Fetching Kubeadmin Password..."
time="2022-02-13T14:46:19Z" level=debug msg="Loading Kubeadmin Password..."
time="2022-02-13T14:46:19Z" level=debug msg="Using Kubeadmin Password loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="Reusing previously-fetched Kubeadmin Password"
time="2022-02-13T14:46:19Z" level=debug msg="Fetching Master Ignition Config..."
time="2022-02-13T14:46:19Z" level=debug msg="Loading Master Ignition Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Root CA..."
time="2022-02-13T14:46:19Z" level=debug msg="  Using Root CA loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="Using Master Ignition Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="Reusing previously-fetched Master Ignition Config"
time="2022-02-13T14:46:19Z" level=debug msg="Fetching Worker Ignition Config..."
time="2022-02-13T14:46:19Z" level=debug msg="Loading Worker Ignition Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Root CA..."
time="2022-02-13T14:46:19Z" level=debug msg="Using Worker Ignition Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="Reusing previously-fetched Worker Ignition Config"
time="2022-02-13T14:46:19Z" level=debug msg="Fetching Bootstrap Ignition Config..."
time="2022-02-13T14:46:19Z" level=debug msg="Loading Bootstrap Ignition Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Kubeconfig Admin Internal Client..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (admin-kubeconfig-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Kubeconfig Kubelet..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kubelet-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (kubelet-bootstrap-kubeconfig-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Kubeconfig Admin Client (Loopback)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (admin-kubeconfig-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-localhost-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Master Machines..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Cluster ID..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Cluster ID loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Platform Credentials Check..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Platform Credentials Check loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Image..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Image loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Master Ignition Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Master Machines from both state file and target directory"
time="2022-02-13T14:46:19Z" level=debug msg="  Using Master Machines loaded from target directory"
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Worker Machines..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Cluster ID..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Platform Credentials Check..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Image..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Worker Ignition Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Worker Machines from both state file and target directory"
time="2022-02-13T14:46:19Z" level=debug msg="  Using Worker Machines loaded from target directory"
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Common Manifests..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Cluster ID..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Ingress Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Ingress Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading DNS Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Cluster ID..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Platform Credentials Check..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using DNS Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Infrastructure Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Cluster ID..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Cloud Provider Config..."
time="2022-02-13T14:46:19Z" level=debug msg="        Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="        Loading Cluster ID..."
time="2022-02-13T14:46:19Z" level=debug msg="        Loading Platform Credentials Check..."
time="2022-02-13T14:46:19Z" level=debug msg="      Using Cloud Provider Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Additional Trust Bundle Config..."
time="2022-02-13T14:46:19Z" level=debug msg="        Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Using Additional Trust Bundle Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Using Infrastructure Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Network Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Network CRDs..."
time="2022-02-13T14:46:19Z" level=debug msg="      Using Network CRDs loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Using Network Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Proxy Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Network Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Proxy Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Scheduler Config..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Scheduler Config loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Image Content Source Policy..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Image Content Source Policy loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Root CA..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (etcd-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Certificate (etcd-signer) loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (etcd-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (etcd-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Certificate (etcd-ca-bundle) loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (etcd-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (etcd-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Certificate (etcd-client) loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (etcd-metric-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Using Certificate (etcd-metric-signer) loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Using Certificate (etcd-metric-ca-bundle) loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (etcd-metric-signer-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Certificate (etcd-metric-signer-client) loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (mcs)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Root CA..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Certificate (mcs) loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading CVOOverrides..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using CVOOverrides loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdCAConfigMap..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdCAConfigMap loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdClientSecret..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdClientSecret loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdMetricClientSecret..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdMetricClientSecret loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdMetricServingCAConfigMap..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdMetricServingCAConfigMap loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdMetricSignerSecret..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdMetricSignerSecret loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdNamespace..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdNamespace loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdService..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdService loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdSignerSecret..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdSignerSecret loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading KubeCloudConfig..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using KubeCloudConfig loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading EtcdServingCAConfigMap..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using EtcdServingCAConfigMap loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading KubeSystemConfigmapRootCA..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using KubeSystemConfigmapRootCA loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading MachineConfigServerTLSSecret..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using MachineConfigServerTLSSecret loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading OpenshiftConfigSecretPullSecret..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using OpenshiftConfigSecretPullSecret loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading OpenshiftMachineConfigOperator..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using OpenshiftMachineConfigOperator loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Common Manifests from both state file and target directory"
time="2022-02-13T14:46:19Z" level=debug msg="  On-disk Common Manifests matches asset in state file"
time="2022-02-13T14:46:19Z" level=debug msg="  Using Common Manifests loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Openshift Manifests..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Cluster ID..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Kubeadmin Password..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading OpenShift Install (Manifests)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading OpenShift Install (Manifests) from both state file and target directory"
time="2022-02-13T14:46:19Z" level=debug msg="    On-disk OpenShift Install (Manifests) matches asset in state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Using OpenShift Install (Manifests) loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading CloudCredsSecret..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using CloudCredsSecret loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading KubeadminPasswordSecret..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using KubeadminPasswordSecret loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading RoleCloudCredsSecretReader..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using RoleCloudCredsSecretReader loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Private Cluster Outbound Service..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Private Cluster Outbound Service loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Baremetal Config CR..."
time="2022-02-13T14:46:19Z" level=debug msg="    Using Baremetal Config CR loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Image..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Openshift Manifests from both state file and target directory"
time="2022-02-13T14:46:19Z" level=debug msg="  On-disk Openshift Manifests matches asset in state file"
time="2022-02-13T14:46:19Z" level=debug msg="  Using Openshift Manifests loaded from state file"
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Proxy Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (admin-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (admin-kubeconfig-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (aggregator)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (aggregator-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (aggregator-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (system:kube-apiserver-proxy)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (aggregator-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (aggregator-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (system:kube-apiserver-proxy)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (aggregator)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Bootstrap SSH Key Pair..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (etcd-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (etcd-metric-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (etcd-metric-signer-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (etcd-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (etcd-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (journal-gatewayd)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Root CA..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-lb-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-external-lb-server)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-internal-lb-server)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-localhost-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-localhost-server)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-service-network-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-service-network-server)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-complete-client-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (admin-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kubelet-client-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (kubelet-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-control-plane-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-to-kubelet-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (kube-apiserver-to-kubelet-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kubelet-bootstrap-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Loading Certificate (kubelet-bootstrap-kubeconfig-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-to-kubelet-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-to-kubelet-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-apiserver-to-kubelet-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-apiserver-to-kubelet-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-control-plane-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-control-plane-kube-controller-manager-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-control-plane-kube-scheduler-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kubelet-bootstrap-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kubelet-client-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kubelet-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kubelet-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (kubelet-serving-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Loading Certificate (kubelet-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Certificate (mcs)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Root CA..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Key Pair (service-account.pub)..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Release Image Pull Spec..."
time="2022-02-13T14:46:19Z" level=debug msg="  Loading Image..."
time="2022-02-13T14:46:19Z" level=debug msg="  Fetching Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="  Reusing previously-fetched Install Config"
time="2022-02-13T14:46:19Z" level=debug msg="  Fetching Kubeconfig Admin Internal Client..."
time="2022-02-13T14:46:19Z" level=debug msg="    Fetching Certificate (admin-kubeconfig-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Reusing previously-fetched Certificate (admin-kubeconfig-client)"
time="2022-02-13T14:46:19Z" level=debug msg="    Fetching Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Reusing previously-fetched Certificate (kube-apiserver-complete-server-ca-bundle)"
time="2022-02-13T14:46:19Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:19Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:19Z" level=debug msg="  Generating Kubeconfig Admin Internal Client..."
time="2022-02-13T14:46:19Z" level=debug msg="  Fetching Kubeconfig Kubelet..."
time="2022-02-13T14:46:19Z" level=debug msg="    Fetching Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Reusing previously-fetched Certificate (kube-apiserver-complete-server-ca-bundle)"
time="2022-02-13T14:46:19Z" level=debug msg="    Fetching Certificate (kubelet-client)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Fetching Certificate (kubelet-bootstrap-kubeconfig-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="      Generating Certificate (kubelet-bootstrap-kubeconfig-signer)..."
time="2022-02-13T14:46:19Z" level=debug msg="    Generating Certificate (kubelet-client)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:20Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:20Z" level=debug msg="  Generating Kubeconfig Kubelet..."
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Kubeconfig Admin Client (Loopback)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Fetching Certificate (admin-kubeconfig-client)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Reusing previously-fetched Certificate (admin-kubeconfig-client)"
time="2022-02-13T14:46:20Z" level=debug msg="    Fetching Certificate (kube-apiserver-localhost-ca-bundle)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Reusing previously-fetched Certificate (kube-apiserver-localhost-ca-bundle)"
time="2022-02-13T14:46:20Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:20Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:20Z" level=debug msg="  Generating Kubeconfig Admin Client (Loopback)..."
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Master Machines..."
time="2022-02-13T14:46:20Z" level=debug msg="  Reusing previously-fetched Master Machines"
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Worker Machines..."
time="2022-02-13T14:46:20Z" level=debug msg="  Reusing previously-fetched Worker Machines"
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Common Manifests..."
time="2022-02-13T14:46:20Z" level=debug msg="  Reusing previously-fetched Common Manifests"
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Openshift Manifests..."
time="2022-02-13T14:46:20Z" level=debug msg="  Reusing previously-fetched Openshift Manifests"
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Proxy Config..."
time="2022-02-13T14:46:20Z" level=debug msg="  Reusing previously-fetched Proxy Config"
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Certificate (admin-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Fetching Certificate (admin-kubeconfig-signer)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Reusing previously-fetched Certificate (admin-kubeconfig-signer)"
time="2022-02-13T14:46:20Z" level=debug msg="  Generating Certificate (admin-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Certificate (aggregator)..."
time="2022-02-13T14:46:20Z" level=debug msg="  Generating Certificate (aggregator)..."
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Certificate (aggregator-ca-bundle)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Fetching Certificate (aggregator-signer)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Generating Certificate (aggregator-signer)..."
time="2022-02-13T14:46:20Z" level=debug msg="  Generating Certificate (aggregator-ca-bundle)..."
time="2022-02-13T14:46:20Z" level=debug msg="  Fetching Certificate (system:kube-apiserver-proxy)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Fetching Certificate (aggregator-signer)..."
time="2022-02-13T14:46:20Z" level=debug msg="    Reusing previously-fetched Certificate (aggregator-signer)"
time="2022-02-13T14:46:20Z" level=debug msg="  Generating Certificate (system:kube-apiserver-proxy)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (aggregator-signer)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (aggregator-signer)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (system:kube-apiserver-proxy)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Fetching Certificate (aggregator)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Reusing previously-fetched Certificate (aggregator)"
time="2022-02-13T14:46:21Z" level=debug msg="  Generating Certificate (system:kube-apiserver-proxy)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Bootstrap SSH Key Pair..."
time="2022-02-13T14:46:21Z" level=debug msg="  Generating Bootstrap SSH Key Pair..."
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (etcd-ca-bundle)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (etcd-ca-bundle)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (etcd-metric-ca-bundle)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (etcd-metric-ca-bundle)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (etcd-metric-signer)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (etcd-metric-signer)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (etcd-metric-signer-client)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (etcd-metric-signer-client)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (etcd-signer)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (etcd-signer)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (etcd-client)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (etcd-client)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (journal-gatewayd)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Fetching Root CA..."
time="2022-02-13T14:46:21Z" level=debug msg="    Reusing previously-fetched Root CA"
time="2022-02-13T14:46:21Z" level=debug msg="  Generating Certificate (journal-gatewayd)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (kube-apiserver-lb-ca-bundle)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-lb-ca-bundle)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (kube-apiserver-external-lb-server)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Fetching Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Reusing previously-fetched Certificate (kube-apiserver-lb-signer)"
time="2022-02-13T14:46:21Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:21Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:21Z" level=debug msg="  Generating Certificate (kube-apiserver-external-lb-server)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (kube-apiserver-internal-lb-server)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Fetching Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Reusing previously-fetched Certificate (kube-apiserver-lb-signer)"
time="2022-02-13T14:46:21Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:21Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:21Z" level=debug msg="  Generating Certificate (kube-apiserver-internal-lb-server)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-lb-signer)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (kube-apiserver-localhost-ca-bundle)..."
time="2022-02-13T14:46:21Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-localhost-ca-bundle)"
time="2022-02-13T14:46:21Z" level=debug msg="  Fetching Certificate (kube-apiserver-localhost-server)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Fetching Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:21Z" level=debug msg="    Reusing previously-fetched Certificate (kube-apiserver-localhost-signer)"
time="2022-02-13T14:46:21Z" level=debug msg="  Generating Certificate (kube-apiserver-localhost-server)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Fetching Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-localhost-signer)"
time="2022-02-13T14:46:22Z" level=debug msg="  Fetching Certificate (kube-apiserver-service-network-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-service-network-ca-bundle)"
time="2022-02-13T14:46:22Z" level=debug msg="  Fetching Certificate (kube-apiserver-service-network-server)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Fetching Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Reusing previously-fetched Certificate (kube-apiserver-service-network-signer)"
time="2022-02-13T14:46:22Z" level=debug msg="    Fetching Install Config..."
time="2022-02-13T14:46:22Z" level=debug msg="    Reusing previously-fetched Install Config"
time="2022-02-13T14:46:22Z" level=debug msg="  Generating Certificate (kube-apiserver-service-network-server)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Fetching Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-service-network-signer)"
time="2022-02-13T14:46:22Z" level=debug msg="  Fetching Certificate (kube-apiserver-complete-server-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-complete-server-ca-bundle)"
time="2022-02-13T14:46:22Z" level=debug msg="  Fetching Certificate (kube-apiserver-complete-client-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Fetching Certificate (admin-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Reusing previously-fetched Certificate (admin-kubeconfig-ca-bundle)"
time="2022-02-13T14:46:22Z" level=debug msg="    Fetching Certificate (kubelet-client-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Fetching Certificate (kubelet-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Generating Certificate (kubelet-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Generating Certificate (kubelet-client-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Fetching Certificate (kube-control-plane-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Fetching Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Generating Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Fetching Certificate (kube-apiserver-lb-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Reusing previously-fetched Certificate (kube-apiserver-lb-signer)"
time="2022-02-13T14:46:22Z" level=debug msg="      Fetching Certificate (kube-apiserver-localhost-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Reusing previously-fetched Certificate (kube-apiserver-localhost-signer)"
time="2022-02-13T14:46:22Z" level=debug msg="      Fetching Certificate (kube-apiserver-service-network-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Reusing previously-fetched Certificate (kube-apiserver-service-network-signer)"
time="2022-02-13T14:46:22Z" level=debug msg="    Generating Certificate (kube-control-plane-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Fetching Certificate (kube-apiserver-to-kubelet-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Fetching Certificate (kube-apiserver-to-kubelet-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Generating Certificate (kube-apiserver-to-kubelet-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Generating Certificate (kube-apiserver-to-kubelet-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Fetching Certificate (kubelet-bootstrap-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Fetching Certificate (kubelet-bootstrap-kubeconfig-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="      Reusing previously-fetched Certificate (kubelet-bootstrap-kubeconfig-signer)"
time="2022-02-13T14:46:22Z" level=debug msg="    Generating Certificate (kubelet-bootstrap-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Generating Certificate (kube-apiserver-complete-client-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Fetching Certificate (kube-apiserver-to-kubelet-ca-bundle)..."
time="2022-02-13T14:46:22Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-to-kubelet-ca-bundle)"
time="2022-02-13T14:46:22Z" level=debug msg="  Fetching Certificate (kube-apiserver-to-kubelet-client)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Fetching Certificate (kube-apiserver-to-kubelet-signer)..."
time="2022-02-13T14:46:22Z" level=debug msg="    Reusing previously-fetched Certificate (kube-apiserver-to-kubelet-signer)"
time="2022-02-13T14:46:22Z" level=debug msg="  Generating Certificate (kube-apiserver-to-kubelet-client)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kube-apiserver-to-kubelet-signer)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Certificate (kube-apiserver-to-kubelet-signer)"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kube-control-plane-ca-bundle)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Certificate (kube-control-plane-ca-bundle)"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kube-control-plane-kube-controller-manager-client)..."
time="2022-02-13T14:46:23Z" level=debug msg="    Fetching Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:23Z" level=debug msg="    Reusing previously-fetched Certificate (kube-control-plane-signer)"
time="2022-02-13T14:46:23Z" level=debug msg="  Generating Certificate (kube-control-plane-kube-controller-manager-client)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kube-control-plane-kube-scheduler-client)..."
time="2022-02-13T14:46:23Z" level=debug msg="    Fetching Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:23Z" level=debug msg="    Reusing previously-fetched Certificate (kube-control-plane-signer)"
time="2022-02-13T14:46:23Z" level=debug msg="  Generating Certificate (kube-control-plane-kube-scheduler-client)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kube-control-plane-signer)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Certificate (kube-control-plane-signer)"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kubelet-bootstrap-kubeconfig-ca-bundle)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Certificate (kubelet-bootstrap-kubeconfig-ca-bundle)"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kubelet-client-ca-bundle)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Certificate (kubelet-client-ca-bundle)"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kubelet-client)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Certificate (kubelet-client)"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kubelet-signer)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Certificate (kubelet-signer)"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (kubelet-serving-ca-bundle)..."
time="2022-02-13T14:46:23Z" level=debug msg="    Fetching Certificate (kubelet-signer)..."
time="2022-02-13T14:46:23Z" level=debug msg="    Reusing previously-fetched Certificate (kubelet-signer)"
time="2022-02-13T14:46:23Z" level=debug msg="  Generating Certificate (kubelet-serving-ca-bundle)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Certificate (mcs)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Certificate (mcs)"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Root CA..."
time="2022-02-13T14:46:23Z" level=debug msg="  Reusing previously-fetched Root CA"
time="2022-02-13T14:46:23Z" level=debug msg="  Fetching Key Pair (service-account.pub)..."
time="2022-02-13T14:46:23Z" level=debug msg="  Generating Key Pair (service-account.pub)..."
time="2022-02-13T14:46:24Z" level=debug msg="  Fetching Release Image Pull Spec..."
time="2022-02-13T14:46:24Z" level=debug msg="  Generating Release Image Pull Spec..."
time="2022-02-13T14:46:24Z" level=debug msg="Using internal constant for release image quay.io/openshift-release-dev/ocp-release@sha256:1c9c59adc3dc9db02691a87e70b5d92665d8faca0f56ac046eebc18145e75721"
time="2022-02-13T14:46:24Z" level=debug msg="  Fetching Image..."
time="2022-02-13T14:46:24Z" level=debug msg="  Reusing previously-fetched Image"
time="2022-02-13T14:46:24Z" level=debug msg="Generating Bootstrap Ignition Config..."
time="2022-02-13T14:46:24Z" level=info msg="Consuming Worker Machines from target directory"
time="2022-02-13T14:46:24Z" level=debug msg="Purging asset \"Worker Machines\" from disk"
time="2022-02-13T14:46:24Z" level=info msg="Consuming Master Machines from target directory"
time="2022-02-13T14:46:24Z" level=debug msg="Purging asset \"Master Machines\" from disk"
time="2022-02-13T14:46:24Z" level=info msg="Consuming Common Manifests from target directory"
time="2022-02-13T14:46:24Z" level=debug msg="Purging asset \"Common Manifests\" from disk"
time="2022-02-13T14:46:24Z" level=info msg="Consuming Openshift Manifests from target directory"
time="2022-02-13T14:46:24Z" level=debug msg="Purging asset \"Openshift Manifests\" from disk"
time="2022-02-13T14:46:24Z" level=info msg="Consuming OpenShift Install (Manifests) from target directory"
time="2022-02-13T14:46:24Z" level=debug msg="Purging asset \"OpenShift Install (Manifests)\" from disk"
time="2022-02-13T14:46:24Z" level=debug msg="Fetching Metadata..."
time="2022-02-13T14:46:24Z" level=debug msg="Loading Metadata..."
time="2022-02-13T14:46:24Z" level=debug msg="  Loading Cluster ID..."
time="2022-02-13T14:46:24Z" level=debug msg="  Loading Install Config..."
time="2022-02-13T14:46:24Z" level=debug msg="  Fetching Cluster ID..."
time="2022-02-13T14:46:24Z" level=debug msg="  Reusing previously-fetched Cluster ID"
time="2022-02-13T14:46:24Z" level=debug msg="  Fetching Install Config..."
time="2022-02-13T14:46:24Z" level=debug msg="  Reusing previously-fetched Install Config"
time="2022-02-13T14:46:24Z" level=debug msg="Generating Metadata..."
time="2022-02-13T14:46:24Z" level=info msg="Ignition-Configs created in: installer-files/temp and installer-files/temp/auth"
time="2022-02-13T18:22:31Z" level=debug msg="OpenShift Installer 4.6.28"
time="2022-02-13T18:22:31Z" level=debug msg="Built from commit c47fb1296122a601bc578b9251ba1fb3c7dd4fd1"
time="2022-02-13T18:22:31Z" level=debug msg="Loading Install Config..."
time="2022-02-13T18:22:31Z" level=debug msg="  Loading SSH Key..."
time="2022-02-13T18:22:31Z" level=debug msg="  Loading Base Domain..."
time="2022-02-13T18:22:31Z" level=debug msg="    Loading Platform..."
time="2022-02-13T18:22:31Z" level=debug msg="  Loading Cluster Name..."
time="2022-02-13T18:22:31Z" level=debug msg="    Loading Base Domain..."
time="2022-02-13T18:22:31Z" level=debug msg="    Loading Platform..."
time="2022-02-13T18:22:31Z" level=debug msg="  Loading Pull Secret..."
time="2022-02-13T18:22:31Z" level=debug msg="  Loading Platform..."
time="2022-02-13T18:22:31Z" level=debug msg="Using Install Config loaded from state file"
time="2022-02-13T18:22:31Z" level=info msg="Waiting up to 40m0s for the cluster at https://api.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx:6443 to initialize..."
time="2022-02-13T18:22:31Z" level=debug msg="Still waiting for the cluster to initialize: Some cluster operators are still updating: authentication, console, image-registry, ingress, kube-storage-version-migrator, monitoring, storage"
time="2022-02-13T18:23:45Z" level=debug msg="Still waiting for the cluster to initialize: Some cluster operators are still updating: authentication, console, image-registry, ingress, kube-storage-version-migrator, monitoring, storage"
time="2022-02-13T18:26:58Z" level=debug msg="Still waiting for the cluster to initialize: Some cluster operators are still updating: authentication, console, image-registry, ingress, kube-storage-version-migrator, monitoring, storage"
time="2022-02-13T18:30:10Z" level=debug msg="Still waiting for the cluster to initialize: Some cluster operators are still updating: authentication, console, image-registry, ingress, kube-storage-version-migrator, monitoring, storage"
time="2022-02-13T18:30:49Z" level=debug msg="OpenShift Installer 4.6.28"
time="2022-02-13T18:30:49Z" level=debug msg="Built from commit c47fb1296122a601bc578b9251ba1fb3c7dd4fd1"
time="2022-02-13T18:30:49Z" level=info msg="Waiting up to 20m0s for the Kubernetes API at https://api.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx:6443..."
time="2022-02-13T18:30:50Z" level=info msg="API v1.19.0+d856161 up"
time="2022-02-13T18:30:50Z" level=info msg="Waiting up to 30m0s for bootstrapping to complete..."
time="2022-02-13T18:30:50Z" level=debug msg="Bootstrap status: complete"
time="2022-02-13T18:30:50Z" level=info msg="It is now safe to remove the bootstrap resources"
time="2022-02-13T18:30:50Z" level=info msg="Time elapsed: 0s"

Please let me know if any other logs would help the investigation.
Thank you

@gfysaris
Copy link
Author

Hello @Praveenmail2him,
After going through the code once again, could you please confirm whether the worker node deployment is supposed to be initiated from the master nodes?
I am trying to analyse the master.ign but I can't find anything related to this. Am I missing something?
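
For what it is worth, my understanding of UPI installs in general (not something I have confirmed in this repo's code) is that the masters do not create the workers themselves: the worker hosts are provisioned by the infrastructure code and only fetch their ignition from the Machine Config Server on the masters, after which their kubelet CSRs have to be approved before they show up as nodes. A quick way to check whether workers are stuck at that stage:

# List certificate signing requests; Pending entries would indicate workers waiting for approval
oc get csr

# Lab-only shortcut: approve everything that is pending (review the list before using this in a real environment)
oc get csr -o name | xargs oc adm certificate approve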

@gfysaris
Copy link
Author

This might be helpful as well:

var allEvents = {
  "metadata": {
    
  },
  "items": [
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d3609d7794ddf5",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d3609d7794ddf5",
        "uid": "b33d83b6-72fe-4e59-84c6-66ff74f5b5ca",
        "resourceVersion": "771961",
        "creationTimestamp": "2022-02-13T15:01:58Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:37:49Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OAuthRouteCheck",
      "message": "route status does not have host address",
      "source": {
        "component": "cluster-authentication-operator"
      },
      "firstTimestamp": "2022-02-13T15:01:58Z",
      "lastTimestamp": "2022-02-15T10:37:49Z",
      "count": 19542,
      "type": "Warning",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d3609e15ff7692",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d3609e15ff7692",
        "uid": "ccadb97a-10fa-46b3-b6af-644e180a8147",
        "resourceVersion": "771556",
        "creationTimestamp": "2022-02-13T15:02:01Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:36:28Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Available message changed from \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\" to \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:02:01Z",
      "lastTimestamp": "2022-02-15T10:36:28Z",
      "count": 273,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d3609e5bf12b8a",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d3609e5bf12b8a",
        "uid": "2ef6e13c-962b-45c3-bcc2-566294286fc4",
        "resourceVersion": "771568",
        "creationTimestamp": "2022-02-13T15:02:02Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:36:29Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Available message changed from \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\" to \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:02:02Z",
      "lastTimestamp": "2022-02-15T10:36:29Z",
      "count": 182,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d360b12fb274c3",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d360b12fb274c3",
        "uid": "ad3c3fc8-407d-4f52-a6fe-35adb1478d1b",
        "resourceVersion": "771806",
        "creationTimestamp": "2022-02-13T15:03:23Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:37:23Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Available message changed from \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\" to \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:03:23Z",
      "lastTimestamp": "2022-02-15T10:37:23Z",
      "count": 1758,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d360b131526724",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d360b131526724",
        "uid": "09c21c32-1e46-4852-a7f7-94e2e17c3d1e",
        "resourceVersion": "771810",
        "creationTimestamp": "2022-02-13T15:03:23Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:37:23Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Degraded message changed from \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\" to \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:03:23Z",
      "lastTimestamp": "2022-02-15T10:37:23Z",
      "count": 1782,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d360b29b2909d9",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d360b29b2909d9",
        "uid": "d2f0c0f0-5d3a-4ab3-9a7c-a793aaf33a7a",
        "resourceVersion": "771851",
        "creationTimestamp": "2022-02-13T15:03:29Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:37:29Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Available message changed from \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\" to \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:03:29Z",
      "lastTimestamp": "2022-02-15T10:37:29Z",
      "count": 1761,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d360b29d198e8b",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d360b29d198e8b",
        "uid": "b479362a-8e69-4809-aa35-cdc0de6ad006",
        "resourceVersion": "771855",
        "creationTimestamp": "2022-02-13T15:03:29Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:37:29Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Degraded message changed from \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\" to \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:03:29Z",
      "lastTimestamp": "2022-02-15T10:37:29Z",
      "count": 1793,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d360f7096c6932",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d360f7096c6932",
        "uid": "e224f582-c5c5-431b-9112-c92b1e68e3eb",
        "resourceVersion": "769126",
        "creationTimestamp": "2022-02-13T15:08:23Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:27:53Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Available message changed from \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\" to \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:08:23Z",
      "lastTimestamp": "2022-02-15T10:27:53Z",
      "count": 179,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d360f70b0ca70d",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d360f70b0ca70d",
        "uid": "e37b077e-ee69-4260-9474-3584460b39fb",
        "resourceVersion": "769130",
        "creationTimestamp": "2022-02-13T15:08:23Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:27:53Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Degraded message changed from \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\" to \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:08:23Z",
      "lastTimestamp": "2022-02-15T10:27:53Z",
      "count": 184,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d360f835bcd3f0",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d360f835bcd3f0",
        "uid": "b93cc204-5db5-4900-af1a-e1ffd40108a3",
        "resourceVersion": "769625",
        "creationTimestamp": "2022-02-13T15:08:28Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:29:26Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Available message changed from \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\" to \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:08:28Z",
      "lastTimestamp": "2022-02-15T10:29:26Z",
      "count": 273,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d360f8379d8fff",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d360f8379d8fff",
        "uid": "4833c4ad-b8a0-4897-a9e1-23aea40de8a7",
        "resourceVersion": "769629",
        "creationTimestamp": "2022-02-13T15:08:28Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:29:26Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Degraded message changed from \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\" to \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:08:28Z",
      "lastTimestamp": "2022-02-15T10:29:26Z",
      "count": 277,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d3611428abd14b",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d3611428abd14b",
        "uid": "3e356ccf-fea2-47f5-b6f4-44d6b95ef6f0",
        "resourceVersion": "771560",
        "creationTimestamp": "2022-02-13T15:10:28Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:36:28Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Degraded message changed from \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\" to \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:10:28Z",
      "lastTimestamp": "2022-02-15T10:36:28Z",
      "count": 280,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d36114688381cc",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d36114688381cc",
        "uid": "058729ef-e229-4ef9-9f4c-1c79ba3ce7de",
        "resourceVersion": "771571",
        "creationTimestamp": "2022-02-13T15:10:29Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:36:29Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Degraded message changed from \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\" to \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T15:10:29Z",
      "lastTimestamp": "2022-02-15T10:36:29Z",
      "count": 181,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d366b1e1ec87c7",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d366b1e1ec87c7",
        "uid": "5a76bb7f-f575-4cbb-83bb-35d2d7a234cd",
        "resourceVersion": "747055",
        "creationTimestamp": "2022-02-15T06:22:28Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T09:11:23Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Degraded message changed from \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\" to \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\",Available message changed from \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\" to \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. 
Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T16:53:23Z",
      "lastTimestamp": "2022-02-15T09:11:23Z",
      "count": 12,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "authentication-operator.16d37d4ad5a5e106",
        "namespace": "openshift-authentication-operator",
        "selfLink": "/api/v1/namespaces/openshift-authentication-operator/events/authentication-operator.16d37d4ad5a5e106",
        "uid": "456f0942-cb58-463d-a093-b97f45609fba",
        "resourceVersion": "767713",
        "creationTimestamp": "2022-02-15T10:22:59Z",
        "managedFields": [
          {
            "manager": "authentication-operator",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:22:59Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Deployment",
        "namespace": "openshift-authentication-operator",
        "name": "authentication-operator",
        "uid": "d8fadf77-52e9-4bcc-b017-e87b1046b659",
        "apiVersion": "apps/v1"
      },
      "reason": "OperatorStatusChanged",
      "message": "Status for clusteroperator/authentication changed: Degraded message changed from \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\" to \"OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\n+OAuthServiceCheckEndpointAccessibleControllerDegraded: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n+OAuthRouteCheckEndpointAccessibleControllerDegraded: route status does not have host address\n+OAuthVersionDeploymentDegraded: Unable to get OAuth server deployment: deployment.apps \\\"oauth-openshift\\\" not found\n+WellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\n+OAuthServerDeploymentDegraded: deployments.apps \\\"oauth-openshift\\\" not found\n+OAuthServerRouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\n+RouteDegraded: Route is not available at canonical host oauth-openshift.apps.ocp4.001.external.ocp.xxx.demos.aws.xxx.xxx: route status ingress is empty\",Available message changed from \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\" to \"ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. 
Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\n+OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints\n+OAuthServiceCheckEndpointAccessibleControllerAvailable: Get \\\"https://192.168.128.140:443/healthz\\\": dial tcp 192.168.128.140:443: connect: connection refused\n+WellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \\\"oauth-openshift\\\" not found (check authentication operator, it is supposed to create this)\"",
      "source": {
        "component": "cluster-authentication-operator-status-controller-statussyncer_authentication"
      },
      "firstTimestamp": "2022-02-13T23:47:29Z",
      "lastTimestamp": "2022-02-15T10:22:59Z",
      "count": 42,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "aws-ebs-csi-driver-controller-64d45cfb45-jd2kk.16d36045d0fab10b",
        "namespace": "openshift-cluster-csi-drivers",
        "selfLink": "/api/v1/namespaces/openshift-cluster-csi-drivers/events/aws-ebs-csi-driver-controller-64d45cfb45-jd2kk.16d36045d0fab10b",
        "uid": "7fcafcc1-f878-40ab-a21b-19d42011349e",
        "resourceVersion": "749770",
        "creationTimestamp": "2022-02-13T14:55:41Z",
        "managedFields": [
          {
            "manager": "kubelet",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T09:20:40Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:fieldPath":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{},"f:host":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-cluster-csi-drivers",
        "name": "aws-ebs-csi-driver-controller-64d45cfb45-jd2kk",
        "uid": "731cd60b-b291-4950-a006-a139ea109a37",
        "apiVersion": "v1",
        "resourceVersion": "5236",
        "fieldPath": "spec.containers{csi-driver}"
      },
      "reason": "Failed",
      "message": "Error: secret \"ebs-cloud-credentials\" not found",
      "source": {
        "component": "kubelet",
        "host": "ip-10-0-142-47.eu-west-1.compute.internal"
      },
      "firstTimestamp": "2022-02-13T14:55:41Z",
      "lastTimestamp": "2022-02-15T09:20:40Z",
      "count": 11752,
      "type": "Warning",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "aws-ebs-csi-driver-controller-64d45cfb45-jd2kk.16d3604aab58635d",
        "namespace": "openshift-cluster-csi-drivers",
        "selfLink": "/api/v1/namespaces/openshift-cluster-csi-drivers/events/aws-ebs-csi-driver-controller-64d45cfb45-jd2kk.16d3604aab58635d",
        "uid": "98f41a89-5486-45b7-a0ca-c1961cd49fa8",
        "resourceVersion": "771378",
        "creationTimestamp": "2022-02-13T14:56:02Z",
        "managedFields": [
          {
            "manager": "kubelet",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2022-02-15T10:35:48Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:count":{},"f:firstTimestamp":{},"f:involvedObject":{"f:apiVersion":{},"f:fieldPath":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:lastTimestamp":{},"f:message":{},"f:reason":{},"f:source":{"f:component":{},"f:host":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-cluster-csi-drivers",
        "name": "aws-ebs-csi-driver-controller-64d45cfb45-jd2kk",
        "uid": "731cd60b-b291-4950-a006-a139ea109a37",
        "apiVersion": "v1",
        "resourceVersion": "5236",
        "fieldPath": "spec.containers{csi-driver}"
      },
      "reason": "Pulled",
      "message": "Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dd4995e89bf036c294e16c5b8e26ae189e5e77ff2f3e3f48477e5f848a70a65\" already present on machine",
      "source": {
        "component": "kubelet",
        "host": "ip-10-0-142-47.eu-west-1.compute.internal"
      },
      "firstTimestamp": "2022-02-13T14:56:02Z",
      "lastTimestamp": "2022-02-15T10:35:48Z",
      "count": 12098,
      "type": "Normal",
      "eventTime": null,
      "reportingComponent": "",
      "reportingInstance": ""
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "image-pruner-1644796800-4xczj.16d37dfbc723bcaa",
        "namespace": "openshift-image-registry",
        "selfLink": "/api/v1/namespaces/openshift-image-registry/events/image-pruner-1644796800-4xczj.16d37dfbc723bcaa",
        "uid": "86628a71-717f-4982-8182-47f71b0c0279",
        "resourceVersion": "770955",
        "creationTimestamp": "2022-02-14T00:00:09Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-image-registry",
        "name": "image-pruner-1644796800-4xczj",
        "uid": "f7982b66-a84e-4f4e-b102-ff444b4b4a5b",
        "apiVersion": "v1",
        "resourceVersion": "172298"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-14T00:00:09.251630Z",
      "series": {
        "count": 1405,
        "lastObservedTime": "2022-02-15T10:34:12.588827Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "router-default-54db5f9857-cgskg.16d360ca87073784",
        "namespace": "openshift-ingress",
        "selfLink": "/api/v1/namespaces/openshift-ingress/events/router-default-54db5f9857-cgskg.16d360ca87073784",
        "uid": "81ff0986-04e5-4a39-a4d6-5b43b4e7e962",
        "resourceVersion": "770947",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-ingress",
        "name": "router-default-54db5f9857-cgskg",
        "uid": "00fa2407-8f72-4bda-966b-b065ee8d50a6",
        "apiVersion": "v1",
        "resourceVersion": "7520"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) didn't match node selector.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.885417Z",
      "series": {
        "count": 1782,
        "lastObservedTime": "2022-02-15T10:34:12.569224Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "router-default-54db5f9857-tb6rk.16d360ca87994979",
        "namespace": "openshift-ingress",
        "selfLink": "/api/v1/namespaces/openshift-ingress/events/router-default-54db5f9857-tb6rk.16d360ca87994979",
        "uid": "1ed5ca34-692d-4e60-96d5-0a0f3fb88f92",
        "resourceVersion": "770950",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-ingress",
        "name": "router-default-54db5f9857-tb6rk",
        "uid": "ea7cb320-0801-47b2-9f6b-2ef1835ad0a9",
        "apiVersion": "v1",
        "resourceVersion": "7516"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) didn't match node selector.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.894990Z",
      "series": {
        "count": 1782,
        "lastObservedTime": "2022-02-15T10:34:12.575699Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "migrator-5b6c8bc7fc-jc5wx.16d360ca88b1eab3",
        "namespace": "openshift-kube-storage-version-migrator",
        "selfLink": "/api/v1/namespaces/openshift-kube-storage-version-migrator/events/migrator-5b6c8bc7fc-jc5wx.16d360ca88b1eab3",
        "uid": "13924101-20da-434a-be97-a342199b3c81",
        "resourceVersion": "770941",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:17Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-kube-storage-version-migrator",
        "name": "migrator-5b6c8bc7fc-jc5wx",
        "uid": "9e0a124d-44f3-4797-8ec3-b6fd4cef4a1f",
        "apiVersion": "v1",
        "resourceVersion": "4532"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.913382Z",
      "series": {
        "count": 1781,
        "lastObservedTime": "2022-02-15T10:34:12.624566Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "certified-operators-8z6rq.16d360eb3f601cb2",
        "namespace": "openshift-marketplace",
        "selfLink": "/api/v1/namespaces/openshift-marketplace/events/certified-operators-8z6rq.16d360eb3f601cb2",
        "uid": "34324d8f-9660-4b2b-ab3e-9d6c209b06a2",
        "resourceVersion": "770944",
        "creationTimestamp": "2022-02-13T15:07:32Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-marketplace",
        "name": "certified-operators-8z6rq",
        "uid": "7b48ac0d-0a6d-4fe2-ad28-7178f66b98f3",
        "apiVersion": "v1",
        "resourceVersion": "19133"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:07:32.417203Z",
      "series": {
        "count": 1773,
        "lastObservedTime": "2022-02-15T10:34:12.592358Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "certified-operators-rf6ts.16d360ca88013768",
        "namespace": "openshift-marketplace",
        "selfLink": "/api/v1/namespaces/openshift-marketplace/events/certified-operators-rf6ts.16d360ca88013768",
        "uid": "26d1e38b-7b45-4c62-853e-36dd89891c83",
        "resourceVersion": "770939",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:17Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-marketplace",
        "name": "certified-operators-rf6ts",
        "uid": "93791f66-1dc6-430c-86d0-90a88436f725",
        "apiVersion": "v1",
        "resourceVersion": "7310"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.901801Z",
      "series": {
        "count": 1781,
        "lastObservedTime": "2022-02-15T10:34:12.613498Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "community-operators-nmcm6.16d360dde5abbbcc",
        "namespace": "openshift-marketplace",
        "selfLink": "/api/v1/namespaces/openshift-marketplace/events/community-operators-nmcm6.16d360dde5abbbcc",
        "uid": "6afe7dcd-a7f3-477e-ac7a-3b04bd039ff8",
        "resourceVersion": "770943",
        "creationTimestamp": "2022-02-13T15:06:35Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-marketplace",
        "name": "community-operators-nmcm6",
        "uid": "6c7818c3-c0af-4e44-91ed-0800ff5d6bba",
        "apiVersion": "v1",
        "resourceVersion": "18681"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:06:35.077636Z",
      "series": {
        "count": 1775,
        "lastObservedTime": "2022-02-15T10:34:12.627871Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "community-operators-rv4wg.16d360ca88761693",
        "namespace": "openshift-marketplace",
        "selfLink": "/api/v1/namespaces/openshift-marketplace/events/community-operators-rv4wg.16d360ca88761693",
        "uid": "d1cebb4d-25a2-4b28-9891-c7e04aea7b27",
        "resourceVersion": "770951",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-marketplace",
        "name": "community-operators-rv4wg",
        "uid": "486f73ca-4ca9-4d3d-a3d3-ac930d515d7f",
        "apiVersion": "v1",
        "resourceVersion": "7324"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.909461Z",
      "series": {
        "count": 1781,
        "lastObservedTime": "2022-02-15T10:34:12.621082Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "redhat-marketplace-8zf8g.16d360ca883c60ef",
        "namespace": "openshift-marketplace",
        "selfLink": "/api/v1/namespaces/openshift-marketplace/events/redhat-marketplace-8zf8g.16d360ca883c60ef",
        "uid": "6ec8f06e-34be-4687-ab1b-873f44c7b2de",
        "resourceVersion": "770952",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-marketplace",
        "name": "redhat-marketplace-8zf8g",
        "uid": "cdae25ab-502e-4499-9a49-56cc0a4810c6",
        "apiVersion": "v1",
        "resourceVersion": "7429"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.905679Z",
      "series": {
        "count": 1781,
        "lastObservedTime": "2022-02-15T10:34:12.617618Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "redhat-marketplace-nwqcf.16d360dc2a517159",
        "namespace": "openshift-marketplace",
        "selfLink": "/api/v1/namespaces/openshift-marketplace/events/redhat-marketplace-nwqcf.16d360dc2a517159",
        "uid": "eb635349-b92a-48c6-ab69-c8aa36b5c112",
        "resourceVersion": "770942",
        "creationTimestamp": "2022-02-13T15:06:27Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:17Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-marketplace",
        "name": "redhat-marketplace-nwqcf",
        "uid": "3c0a0bd6-e1cb-4e07-800f-9f2edcc8a901",
        "apiVersion": "v1",
        "resourceVersion": "18636"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:06:27.639411Z",
      "series": {
        "count": 1775,
        "lastObservedTime": "2022-02-15T10:34:12.595351Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "redhat-operators-2lcdx.16d360e2613969f2",
        "namespace": "openshift-marketplace",
        "selfLink": "/api/v1/namespaces/openshift-marketplace/events/redhat-operators-2lcdx.16d360e2613969f2",
        "uid": "84d695ca-eab4-4274-992f-1b73c9741796",
        "resourceVersion": "770953",
        "creationTimestamp": "2022-02-13T15:06:54Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-marketplace",
        "name": "redhat-operators-2lcdx",
        "uid": "e58f5091-9396-422a-a22e-3b669700cce8",
        "apiVersion": "v1",
        "resourceVersion": "18791"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:06:54.330387Z",
      "series": {
        "count": 1775,
        "lastObservedTime": "2022-02-15T10:34:12.631708Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "redhat-operators-c7rt2.16d360ca87c734f8",
        "namespace": "openshift-marketplace",
        "selfLink": "/api/v1/namespaces/openshift-marketplace/events/redhat-operators-c7rt2.16d360ca87c734f8",
        "uid": "be2b3999-2d41-4315-9aec-ba62f6a0dddd",
        "resourceVersion": "770948",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-marketplace",
        "name": "redhat-operators-c7rt2",
        "uid": "1c7e206b-633d-4ef4-8272-89d2225f33ca",
        "apiVersion": "v1",
        "resourceVersion": "7443"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.898000Z",
      "series": {
        "count": 1781,
        "lastObservedTime": "2022-02-15T10:34:12.610253Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "kube-state-metrics-7bb7644f78-pc2k9.16d360ca868d0639",
        "namespace": "openshift-monitoring",
        "selfLink": "/api/v1/namespaces/openshift-monitoring/events/kube-state-metrics-7bb7644f78-pc2k9.16d360ca868d0639",
        "uid": "16f1ce85-95c6-4429-ba48-c7f6253a8d77",
        "resourceVersion": "770949",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-monitoring",
        "name": "kube-state-metrics-7bb7644f78-pc2k9",
        "uid": "176bb28b-4078-45ba-86ba-e300a74d64ec",
        "apiVersion": "v1",
        "resourceVersion": "4032"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.877410Z",
      "series": {
        "count": 1781,
        "lastObservedTime": "2022-02-15T10:34:12.566038Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "openshift-state-metrics-7b9f57d96b-mm9zm.16d360ca86013d12",
        "namespace": "openshift-monitoring",
        "selfLink": "/api/v1/namespaces/openshift-monitoring/events/openshift-state-metrics-7b9f57d96b-mm9zm.16d360ca86013d12",
        "uid": "6e1a1752-f047-4052-ac3f-e1ab1f42d966",
        "resourceVersion": "770956",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-monitoring",
        "name": "openshift-state-metrics-7b9f57d96b-mm9zm",
        "uid": "07c39e02-7535-46f3-a720-b60da59574a0",
        "apiVersion": "v1",
        "resourceVersion": "4047"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.868244Z",
      "series": {
        "count": 1782,
        "lastObservedTime": "2022-02-15T10:34:12.585895Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "prometheus-adapter-547485458b-h9nz4.16d360ca86388543",
        "namespace": "openshift-monitoring",
        "selfLink": "/api/v1/namespaces/openshift-monitoring/events/prometheus-adapter-547485458b-h9nz4.16d360ca86388543",
        "uid": "2845ad06-875e-4f8b-a98f-4ae88a0b3860",
        "resourceVersion": "770957",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-monitoring",
        "name": "prometheus-adapter-547485458b-h9nz4",
        "uid": "d6f80d5e-d2aa-42ff-8040-8f07726450c6",
        "apiVersion": "v1",
        "resourceVersion": "15693"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.871871Z",
      "series": {
        "count": 1781,
        "lastObservedTime": "2022-02-15T10:34:12.562055Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "prometheus-adapter-5d6dbb96d9-6x7qf.16d39ee5fda31f95",
        "namespace": "openshift-monitoring",
        "selfLink": "/api/v1/namespaces/openshift-monitoring/events/prometheus-adapter-5d6dbb96d9-6x7qf.16d39ee5fda31f95",
        "uid": "d0c47f34-aaee-46f9-a31a-deb0ad7fee35",
        "resourceVersion": "770945",
        "creationTimestamp": "2022-02-14T10:03:19Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-monitoring",
        "name": "prometheus-adapter-5d6dbb96d9-6x7qf",
        "uid": "f39d1dc3-e6f6-42f2-b4d8-5bbcb4590935",
        "apiVersion": "v1",
        "resourceVersion": "346591"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-14T10:03:19.560385Z",
      "series": {
        "count": 995,
        "lastObservedTime": "2022-02-15T10:34:12.572428Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "prometheus-adapter-7ffcf4856b-28qq2.16d3aead179e930d",
        "namespace": "openshift-monitoring",
        "selfLink": "/api/v1/namespaces/openshift-monitoring/events/prometheus-adapter-7ffcf4856b-28qq2.16d3aead179e930d",
        "uid": "49a8174b-7bfe-472e-91f8-a710f0360514",
        "resourceVersion": "770940",
        "creationTimestamp": "2022-02-14T14:52:27Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:17Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-monitoring",
        "name": "prometheus-adapter-7ffcf4856b-28qq2",
        "uid": "36aa3478-b4f5-496c-91a7-5736eab9b6fe",
        "apiVersion": "v1",
        "resourceVersion": "430472"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-14T14:52:27.369203Z",
      "series": {
        "count": 793,
        "lastObservedTime": "2022-02-15T10:34:12.582741Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    },
    {
      "kind": "Event",
      "apiVersion": "v1",
      "metadata": {
        "name": "telemeter-client-57c5477dd-gcbhd.16d360ca87396f3c",
        "namespace": "openshift-monitoring",
        "selfLink": "/api/v1/namespaces/openshift-monitoring/events/telemeter-client-57c5477dd-gcbhd.16d360ca87396f3c",
        "uid": "7d9826d0-8c9b-44dd-b8d5-1dcef09b69ce",
        "resourceVersion": "770954",
        "creationTimestamp": "2022-02-13T15:05:11Z",
        "managedFields": [
          {
            "manager": "kube-scheduler",
            "operation": "Update",
            "apiVersion": "events.k8s.io/v1",
            "time": "2022-02-15T10:34:18Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {"f:action":{},"f:eventTime":{},"f:note":{},"f:reason":{},"f:regarding":{"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:reportingController":{},"f:reportingInstance":{},"f:series":{".":{},"f:count":{},"f:lastObservedTime":{}},"f:type":{}}
          }
        ]
      },
      "involvedObject": {
        "kind": "Pod",
        "namespace": "openshift-monitoring",
        "name": "telemeter-client-57c5477dd-gcbhd",
        "uid": "db399b0d-5229-488f-abb3-82f6bb80a026",
        "apiVersion": "v1",
        "resourceVersion": "4180"
      },
      "reason": "FailedScheduling",
      "message": "0/3 nodes are available: 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
      "source": {
        
      },
      "firstTimestamp": null,
      "lastTimestamp": null,
      "type": "Warning",
      "eventTime": "2022-02-13T15:05:11.888708Z",
      "series": {
        "count": 1784,
        "lastObservedTime": "2022-02-15T10:34:12.579641Z"
      },
      "action": "Scheduling",
      "reportingComponent": "default-scheduler",
      "reportingInstance": "default-scheduler-ip-10-0-186-217"
    }
  ]
}

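The events above all point to the same underlying symptom: no worker nodes ever registered with the cluster ("Got 0 worker nodes, 3 master nodes"), so the router, marketplace, and monitoring pods cannot schedule and the authentication operator stays degraded. A minimal diagnostic sketch for narrowing down where the workers are getting lost (the region and the Name tag filter are assumptions based on the terraform.tfvars posted above; adjust them to your cluster's infra ID):

# 1. Confirm whether any worker nodes have registered with the cluster
oc get nodes -o wide

# 2. In a UPI install, workers only join after their kubelet CSRs are approved
#    (two CSRs per worker: client and serving). Approve any that are Pending.
oc get csr
oc adm certificate approve <csr-name>

# 3. Check whether Terraform created the worker EC2 instances at all.
#    The Name tag pattern below is an assumption; replace it with your infra ID.
aws ec2 describe-instances --region eu-west-1 \
  --filters "Name=tag:Name,Values=*worker*" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,PrivateIpAddress,State.Name]" \
  --output table

If the describe-instances call returns nothing, the Terraform run never created the worker machines and the worker module's inputs are the place to look; if the instances exist but never show up in oc get nodes, pending CSRs or workers failing to fetch their ignition config via the bootstrap/internal load balancer are the usual causes.
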
@danilouchoa

I am also having the same problem.
