Kubernetes is a container cluster management tool.
Amazon EKS is a managed Kubernetes cluster service.
Elastic Kubernetes Service (EKS) gives fault tolerance (FT): if something fails, the Kubernetes master automatically relaunches the container.
EKS gives a seamless, fully managed experience.
EKS also has a multi-master node setup for high availability.
On the master node of Kubernetes, different programs run (kube-scheduler, kube-apiserver, etcd) that control the cluster's nodes; together these are termed the Control Plane.
In Amazon EKS the master node is fully managed by AWS.
The worker nodes are not fully managed by AWS.
- Note:
We can create an EKS cluster using:
- Web UI
- Terraform
- eksctl
- Setup:
To create an EKS cluster using "eksctl" we need the following:
- IAM user (create an IAM user to access AWS EKS)
- AWS CLI (on the local laptop, for AWS authentication)
- eksctl tool on the local machine (creates the EKS cluster)
- kubectl tool on the local machine (to do work inside the cluster)
- eksctl create cluster (creates the cluster from the local laptop)
- AWS Console check (verify the cluster is created: AWS Console -->> EKS)
Go to the AWS Console and create an IAM user, which is used for authentication:
Now attach a policy and create the new user:
Now click on the created user:
Now click "Security credentials" and create an access key:
Retrieve the access key (copy the key):
Search on Google -->> "AWS CLI install Windows" -->> download the AWS CLI for Windows (64-bit).
Command to check that the AWS CLI works, in Command Prompt / Git Bash:
aws --version
Paste the access key of the IAM user when prompted:
aws configure
This AWS CLI tool helps us connect to AWS and use AWS services from the laptop/local machine.
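As a quick check that the pasted credentials actually work, we can ask AWS who we are (a minimal sketch; the values printed depend on your own account):

```shell
# Verify the configured credentials by asking STS for the caller identity.
# On success this prints a JSON object with "UserId", "Account" and "Arn".
aws sts get-caller-identity
```

If this command returns an error instead of JSON, re-run `aws configure` and re-check the access key.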
eksctl is a third-party tool to manage EKS.
Search "eksctl" in the browser, open the link, and click the GitHub repo link given on the right side of the page.
On GitHub click the "Releases" option: (Download: eksctl_windows_amd64.zip)
Note:
After downloading the eksctl tool we extract it, because we downloaded a zip file.
After extracting the eksctl tool we need to add its path to the system -->> "Edit environment variables".
Go to "User variables for __" --> click "Path" -->> click "Edit" and add the path of the extracted eksctl tool's location here:
Now in Command Prompt / Git Bash we can check using the commands:
eksctl
eksctl version
- Note:
The "eksctl" command is only for creating and deleting clusters, not for doing activities inside the cluster or on the worker nodes.
The "eksctl" tool connects to the master through the "kube-apiserver" (KubeAPI), but the master node itself is fully managed by AWS.
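eksctl can also be used for read-only checks on what it created; a short sketch (the region and cluster name match the ones used later in this walkthrough):

```shell
# List the EKS clusters eksctl can see in the region.
eksctl get cluster --region ap-south-1

# List the managed nodegroups of a specific cluster.
eksctl get nodegroup --cluster pscluster --region ap-south-1
```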
Search 'kubectl install Windows' in the browser -->> 'Install kubectl binary with curl on Windows' -->> copy the command and run it on the local system.
curl.exe -LO "https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe"
We can check it using the command (before the cluster exists, only the client version is available):
kubectl version --client
To create a cluster we use the "eksctl" command, and to do anything inside the cluster we use the "kubectl" command.
To create a Kubernetes cluster, we use the help command to show the options:
eksctl create cluster --help
Create the EKS cluster using these options:
eksctl create cluster --name pscluster --region ap-south-1 --version 1.30 --nodegroup-name psnodegp --instance-types t2.micro --nodes 3 --nodes-min 3 --nodes-max 6 --node-volume-size 8 --node-volume-type gp3 --ssh-access --enable-ssm --instance-name psworkernode --managed
If we want to launch an OS, server, or app, we bundle that entire software into one box called an "Image"; in the container world this is called a "Container Image".
When we launch an app/container/pod with the help of an image, the term used in the K8s world is "Deployment".
Command to create a deployment/container in AWS EKS:
kubectl create deployment psapp --image=vimal13/apache-webserver-php
We can check the pods using a kubectl command:
kubectl get pods
We can see the full info of the pods using:
kubectl get pods -o wide
We can also connect directly to a pod (container) from the laptop (substitute a real pod name taken from "kubectl get pods"):
kubectl exec -it psapp-xxxxxxxxxx-yyyyy -- bash
The master node keeps monitoring the pods because there is a program running on each worker node that communicates with the master; that program is known as the "kubelet". This too is managed by EKS.
If we delete a pod, or any fault occurs and a pod goes down, the master node automatically launches the same pod again on some node. "On some node" means the master's kube-scheduler program keeps monitoring which worker node is free, and that node is used to launch the pod.
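This self-healing can be demonstrated from the CLI; a sketch (the pod name below is a placeholder, copy a real one from "kubectl get pods"):

```shell
# Note a pod name of the psapp deployment, e.g. psapp-xxxxxxxxxx-yyyyy.
kubectl get pods

# Delete that pod (placeholder name: substitute a real one).
kubectl delete pod psapp-xxxxxxxxxx-yyyyy

# Watch the Deployment's ReplicaSet launch a replacement pod with a new name.
kubectl get pods -w
```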
Kubernetes has its own load-balancing concept, but to use an external load balancer with "vanilla Kubernetes" a plugin is needed; "Amazon EKS" comes with pre-created plugins for using AWS services such as the Elastic Load Balancer (ELB).
Command to get the service / load balancer list:
kubectl get svc
Command to check the load balancer / expose-deployment options:
kubectl expose deployment --help
Command to create the LB:
kubectl expose deployment psapp --name pslb --type=LoadBalancer --port 80
After creating the load balancer we get an "EXTERNAL-IP" that we can open as a link in the browser:
kubectl get svc
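We can also test it from the terminal instead of the browser; a sketch assuming the service is named pslb as above (note that on AWS the EXTERNAL-IP column actually holds an ELB DNS hostname):

```shell
# Extract the ELB hostname of the pslb service and request the app with curl.
LB=$(kubectl get svc pslb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl "http://$LB"
```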
Kubernetes gives us a fantastic option, "scale": using horizontal scaling we can scale-out and scale-in our deployment:
kubectl scale deployment psapp --replicas=4
We can also see from the CLI which "node" each of our pods is on:
kubectl get pods -o wide
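Scale-in works the same way, by lowering the replica count (a sketch using the same deployment name as above):

```shell
# Scale the deployment back in from 4 replicas to 2.
kubectl scale deployment psapp --replicas=2

# The extra pods are terminated; the remaining ones keep serving traffic.
kubectl get pods -o wide
```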
We can use our Elastic Load Balancer's "EXTERNAL-IP" to access our psapp (pod/container):
Paste the "EXTERNAL-IP" obtained from the "kubectl get svc" command into the browser, and we access our psapp through the load balancer:
- Note:
From the screenshot below we can see that our load balancer works; every time we connect, we reach a new pod:
Now check on the AWS Console that our cluster is created: AWS Dashboard -->> EKS
The EC2 worker nodes are also created:
- Note:
Here we can see that our instances launch in different "Availability Zones" because we used a "nodegroup" while creating the cluster. EKS is very intelligent: every node launches in a different AZ, so if any AZ goes down, our app keeps working from the other AZs.
Here, our local laptop's public key is attached to the worker nodes because we used "--ssh-access", so I can access the cluster node instances from the local machine and manage them.
ssh ec2-user@(Public_IP)
- Load Balancer created:
A VPC is also created automatically by EKS (the VPC gives the IP range and subnets for our nodes and pods):
AWS has its own networking plugin, the Amazon VPC CNI, which EKS uses for K8s networking.
Every VPC has subnets, and every subnet gives an IP address range.
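We can inspect the subnets eksctl created from the CLI; a sketch (the tag key below is the one eksctl applies to its resources, treat it as an assumption and adjust for your version):

```shell
# List the subnets tagged for our cluster, with their AZ and CIDR (IP) range.
aws ec2 describe-subnets \
  --region ap-south-1 \
  --filters "Name=tag:alpha.eksctl.io/cluster-name,Values=pscluster" \
  --query "Subnets[].{AZ:AvailabilityZone,CIDR:CidrBlock}"
```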
The AWS EKS master node does all the monitoring activity. If we want to delete the entire cluster, we need only the following single command:
eksctl delete cluster --name pscluster --region ap-south-1