POC Requirements Rancher

    Architecture

    • 1 highly available Rancher Management Cluster
    • 1 or more highly available Downstream Workload Clusters
    • ...

    Summary

    The following things should be prepared before the POC starts:

    • Hardware for Rancher Management Server and Downstream clusters, including network setup, OS, SSH access and container runtime
    • Management workstation or local workstation with access to the VMs and all necessary CLI tools
    • TLS certificate for Rancher
    • DNS entry for Rancher
    • Loadbalancer in front of Rancher
    • TLS certificates for test workloads
    • DNS entries for test workloads
    • Loadbalancer in front of each downstream cluster
    • Rancher Helm Chart is accessible from the management workstation
    • All necessary Rancher and Kubernetes Docker Images are either accessible by all servers directly, or through a mirror/proxy, or available in an internal Docker registry

    The details are described below.

    Prerequisites

    Hardware

    | Amount | Compute Type | CPU    | RAM  | Disk Capacity       | Role                                                                              |
    | ------ | ------------ | ------ | ---- | ------------------- | --------------------------------------------------------------------------------- |
    | 3      | VM           | 2 vCPU | 8 GB | 35 GB, >= 1000 IOPS | Rancher Management nodes                                                           |
    | 3-n    | VM           | 2 vCPU | 8 GB | 35 GB, >= 1000 IOPS | Downstream workload cluster nodes. Additional worker nodes could also be larger.   |

    Operating System Requirements

    • OS
      • Ubuntu 16.04, 18.04, 20.04

      • RHEL/CentOS 7.5, 7.6, 7.7, 7.8

      • Oracle Linux 7.6, 7.7

      • SLES 12 SP5, 15 SP2, 15 SP3

    • SSH access to all virtual machines

    CLI Tools and binaries

    The following CLI tools and binaries are needed on a management workstation to set up and manage Rancher and Kubernetes (a quick verification sketch follows the list):

  • kubectl

  • helm
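
A minimal sketch of verifying the workstation tooling; it assumes both binaries are already on the PATH and that a kubeconfig will be available once the clusters exist:

```bash
# Client-side version checks (no cluster connection required)
kubectl version --client
helm version

# Once a kubeconfig for a cluster is in place, confirm the cluster is reachable
kubectl get nodes
```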

Networking

  • Application cluster nodes should be connected to the same VLAN and have unrestricted connectivity within the VLAN. In general, the network configuration must satisfy Rancher's port/communication requirements (a basic connectivity check is sketched after this list).

  • It is recommended to disable firewalld; see the Rancher documentation for details.

  • Rancher Management hosts can be connected to the same VLAN or a separate one. In the latter case, the network load balancer endpoint must be reachable from the other VLAN.
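
A minimal sketch of a connectivity smoke test between the machines; the hostnames and ports below are placeholders, and the authoritative list of required ports is the Rancher networking documentation:

```bash
# Check a handful of commonly required ports from the management workstation
# (or from node to node). Hostnames and the port list are placeholders.
for host in mgmt-node-1 mgmt-node-2 mgmt-node-3; do
  for port in 22 80 443 6443; do
    nc -zv -w 3 "$host" "$port"
  done
done
```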

Other requirements

Rancher Management Cluster

  • TLS/SSL certificate for the Rancher management UI & API

  • Layer 4 TCP load balancers in front of the Rancher Management cluster nodes, proxying to

    • the cluster's ingress controller (ports 80 and 443)

    • the Kubernetes API server (port 6443)

    The LB could be a software LB (nginx, haproxy, ...), a hardware LB (F5 BIG-IP, ...) or keepalived. A sketch of a software LB configuration follows this section.

  • DNS entry for the Rancher management console endpoint pointing to the LB in front of the ingress controller
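
A minimal sketch of the software-LB option, using nginx with its stream module as a layer 4 proxy in front of the three management nodes; the node IPs and the config path are placeholders, and a hardware LB or keepalived setup would replace this entirely:

```bash
# Write an L4 (stream) nginx configuration that forwards 80/443 to the cluster's
# ingress controller and 6443 to the Kubernetes API server on the management
# nodes. Node IPs are placeholders.
cat > /etc/nginx/nginx.conf <<'EOF'
worker_processes 4;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_http {
        server 10.0.0.11:80;
        server 10.0.0.12:80;
        server 10.0.0.13:80;
    }
    upstream rancher_https {
        server 10.0.0.11:443;
        server 10.0.0.12:443;
        server 10.0.0.13:443;
    }
    upstream kube_api {
        server 10.0.0.11:6443;
        server 10.0.0.12:6443;
        server 10.0.0.13:6443;
    }

    server { listen 80;   proxy_pass rancher_http; }
    server { listen 443;  proxy_pass rancher_https; }
    server { listen 6443; proxy_pass kube_api; }
}
EOF

nginx -t && nginx -s reload
```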

Each Downstream Cluster

  • DNS Alias (or wildcard DNS, e.g. *.prod.cluster.acme.com) for the applications deployed in the Downstream Cluster

  • TLS/SSL certificates for applications that should be exposed

  • Layer 4 TCP load balancers in front of each cluster's worker nodes, proxying to

    • the cluster's ingress controller (ports 80 and 443)

    • the Kubernetes API server (port 6443)

    The LB could be a software LB (nginx, haproxy, ...), a hardware LB (F5 BIG-IP, ...) or keepalived.

  • DNS entries (or wildcard DNS entries) for applications that should be exposed, pointing to the LBs in front of the ingress controllers. A sketch of exposing a test workload follows this section.
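
A minimal sketch of exposing a test workload through a downstream cluster's ingress controller, matching the wildcard DNS example above; the namespace, resource names, certificate files, and hostname are placeholders:

```bash
# Deploy a trivial test workload (names and hostname are placeholders)
kubectl create namespace demo
kubectl -n demo create deployment demo --image=nginx --port=80
kubectl -n demo expose deployment demo --port=80

# TLS certificate for the exposed application
kubectl -n demo create secret tls demo-tls \
  --cert=demo.prod.cluster.acme.com.crt \
  --key=demo.prod.cluster.acme.com.key

# Ingress rule served by the cluster's ingress controller, reachable through
# the L4 LB in front of the worker nodes
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: demo
spec:
  tls:
    - hosts:
        - demo.prod.cluster.acme.com
      secretName: demo-tls
  rules:
    - host: demo.prod.cluster.acme.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
EOF
```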

Authentication

  • For Active Directory integration, a service account should be available that has permissions to look up users and groups and to perform binds for authentication. A quick way to verify the account is sketched below.
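
A minimal sketch of verifying such a service account with the OpenLDAP client tools; the server, bind DN, base DN, and test user are placeholders:

```bash
# Bind as the service account and look up a test user; a successful search
# confirms both the bind and the lookup permissions. All names are placeholders.
ldapsearch -x -H ldaps://ad.acme.com:636 \
  -D "CN=svc-rancher,OU=ServiceAccounts,DC=acme,DC=com" -W \
  -b "DC=acme,DC=com" "(sAMAccountName=testuser)" cn memberOf
```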

Airgapped setups
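
The Summary above already lists the airgap-specific preparation: the Rancher Helm chart must be reachable from the management workstation, and all Rancher and Kubernetes images must be available through a mirror/proxy or an internal registry. A minimal sketch of an airgapped install on the management cluster, assuming an internal registry at registry.acme.internal and a pre-provided TLS certificate (hostname, registry, chart version, and file names are placeholders):

```bash
# Fetch the chart while still connected, so it can later be installed offline
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm fetch rancher-stable/rancher

# Bring your own TLS certificate instead of relying on cert-manager
kubectl create namespace cattle-system
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=rancher.acme.internal.crt \
  --key=rancher.acme.internal.key

# Install from the local chart archive, pulling all system images from the
# internal registry
helm install rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=rancher.acme.internal \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=registry.acme.internal \
  --set useBundledSystemChart=true
```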
