API loadbalancer type Internal does not work on openstack #16866

Open
networkhell opened this issue Sep 27, 2024 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@networkhell

/kind bug

1. What kops version are you running?

kops version
Client version: 1.30.1

2. What Kubernetes version are you running?

kubectl version
Client Version: v1.31.1
Kustomize Version: v5.4.2
Server Version: v1.30.5

3. What cloud provider are you using?
OpenStack

4. What commands did you run? What is the simplest way to reproduce this issue?

kops create -f kops-test-fh.k8s.local.yaml --state swift://kops
kops create secret --name kops-test-fh.k8s.local sshpublickey admin -i ~/.ssh/id_kops.pub --state swift://kops
kops update cluster --name kops-test-fh.k8s.local --yes --state swift://kops

5. What happened after the commands executed?

The cluster came up, but the API endpoint kops configured is a public floating IP:

kubectl cluster-info
Kubernetes control plane is running at https://31.172.*.*
CoreDNS is running at https://31.172.*.*/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
KubeDNSUpstream is running at https://31.172.*.*/api/v1/namespaces/kube-system/services/kube-dns-upstream:dns/proxy
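
A quick way to confirm which endpoint the generated kubeconfig points at (not part of the original report, just for reference):

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# prints the API server URL kubectl uses; here it is the public floating IP shown above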

6. What did you expect to happen?
Given that the API spec looks like this:

spec:
  api:
    loadBalancer:
      type: Internal
      useForInternalApi: true

I expect kops not to allocate a floating IP for the API load balancer. kops should also configure Kubernetes to use the internal load balancer IP as the API endpoint.
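
For illustration, one way to check whether a floating IP was attached to the API load balancer's VIP (assuming the OpenStack CLI with the Octavia plugin is available; the load balancer ID is a placeholder):

openstack loadbalancer list
openstack loadbalancer show <api-lb-id> -c vip_address -c vip_port_id
openstack floating ip list
# for type Internal, no floating IP should be associated with the VIP address/port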

7. Please provide your cluster manifest.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2024-09-23T09:47:20Z"
  name: kops-test-fh.k8s.local
spec:
  addons:
  - manifest: swift://kops-addons/addon.yaml
  api:
    loadBalancer:
      type: Internal
      useForInternalApi: true
  authorization:
    rbac: {}
  certManager:
    enabled: true
  channel: stable
  cloudConfig:
    openstack:
      blockStorage:
        bs-version: v3
        clusterName: kops-test-fh.k8s.local
        createStorageClass: false
        csiTopologySupport: true
        ignore-volume-az: false
      loadbalancer:
        floatingNetwork: external
        floatingNetworkID: ***
        method: ROUND_ROBIN
        provider: amphora
        useOctavia: true
      monitor:
        delay: 15s
        maxRetries: 3
        timeout: 10s
      router:
        externalNetwork: external
  cloudControllerManager:
    clusterName: kops-test-fh.k8s.local
  cloudProvider: openstack
  configBase: swift://kops/kops-test-fh.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: control-plane-muc5-a
      name: a
      volumeType: rbd_fast
    - instanceGroup: control-plane-muc5-b
      name: b
      volumeType: rbd_fast
    - instanceGroup: control-plane-muc5-d
      name: c
      volumeType: rbd_fast
    manager:
      backupInterval: 24h0m0s
      backupRetentionDays: 90
      env:
      - name: ETCD_LISTEN_METRICS_URLS
        value: http://0.0.0.0:8081
      - name: ETCD_METRICS
        value: basic
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: control-plane-muc5-a
      name: a
      volumeType: rbd_fast
    - instanceGroup: control-plane-muc5-b
      name: b
      volumeType: rbd_fast
    - instanceGroup: control-plane-muc5-d
      name: c
      volumeType: rbd_fast
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    anonymousAuth: false
    tlsCipherSuites:
    - TLS_AES_128_GCM_SHA256
    - TLS_AES_256_GCM_SHA384
    - TLS_CHACHA20_POLY1305_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
    tlsMinVersion: VersionTLS13
  kubeDNS:
    nodeLocalDNS:
      cpuRequest: 25m
      enabled: true
      memoryRequest: 5Mi
    provider: CoreDNS
  kubeProxy:
    enabled: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.30.5
  metricsServer:
    enabled: true
    insecure: false
  networkCIDR: 10.42.42.0/24
  networking:
    calico:
      bpfEnabled: true
      encapsulationMode: vxlan
      wireguardEnabled: true
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 10.42.42.64/26
    name: muc5-a
    type: Private
    zone: muc5-a
  - cidr: 10.42.42.128/26
    name: muc5-b
    type: Private
    zone: muc5-b
  - cidr: 10.42.42.192/26
    name: muc5-d
    type: Private
    zone: muc5-d
  - cidr: 10.42.42.0/29
    name: utility-muc5-a
    type: Utility
    zone: muc5-a
  - cidr: 10.42.42.8/29
    name: utility-muc5-b
    type: Utility
    zone: muc5-b
  - cidr: 10.42.42.16/29
    name: utility-muc5-d
    type: Utility
    zone: muc5-d
  topology:
    dns:
      type: None

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2024-09-27T08:44:39Z"
  labels:
    kops.k8s.io/cluster: kops-test-fh.k8s.local
  name: bastions
spec:
  associatePublicIp: true
  image: Debian 12
  machineType: SCS-2V-8-20s
  maxSize: 1
  minSize: 1
  role: Bastion
  subnets:
  - utility-muc5-a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2024-09-23T09:47:20Z"
  labels:
    kops.k8s.io/cluster: kops-test-fh.k8s.local
  name: control-plane-muc5-a
spec:
  image: flatcar
  machineType: SCS-2V-8-20s
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - muc5-a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2024-09-23T09:47:20Z"
  labels:
    kops.k8s.io/cluster: kops-test-fh.k8s.local
  name: control-plane-muc5-b
spec:
  image: flatcar
  machineType: SCS-2V-8-20s
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - muc5-b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2024-09-23T09:47:20Z"
  labels:
    kops.k8s.io/cluster: kops-test-fh.k8s.local
  name: control-plane-muc5-d
spec:
  image: flatcar
  machineType: SCS-2V-8-20s
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - muc5-d

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2024-09-23T09:47:21Z"
  labels:
    kops.k8s.io/cluster: kops-test-fh.k8s.local
  name: nodes-muc5-a
spec:
  image: flatcar
  machineType: SCS-2V-8-20s
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - muc5-a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2024-09-23T09:47:21Z"
  labels:
    kops.k8s.io/cluster: kops-test-fh.k8s.local
  name: nodes-muc5-b
spec:
  image: flatcar
  machineType: SCS-2V-8-20s
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - muc5-b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2024-09-23T09:47:21Z"
  labels:
    kops.k8s.io/cluster: kops-test-fh.k8s.local
  name: nodes-muc5-d
spec:
  image: flatcar
  machineType: SCS-2V-8-20s
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - muc5-d

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

I will provide this if it is useful for troubleshooting, since it will take some time to redact the massive amount of log output.

@k8s-ci-robot added the kind/bug (Categorizes issue or PR as related to a bug.) label on Sep 27, 2024
@networkhell
Author

I did some more research and it seems to kind of work when using public DNS, but then kops does not set the DNS records; I have to set them manually:

  topology:
    dns:
      type: Public

The problem is that the DNS record for api.internal. is never created. If I create it by hand, all nodes join the cluster.
So am I missing some essential config option, or is this a bug?
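
For reference, the manual workaround is roughly the following (a sketch, assuming the public zone is hosted in OpenStack Designate; the zone name and internal load balancer VIP are placeholders):

# create the missing A record for the internal API name by hand
openstack recordset create <zone-name> api.internal.kops-test-fh.k8s.local. --type A --record <internal-lb-vip>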
