Running Container Clusters with Kubernetes
Introduction to Kubernetes
Commonly referred to by its internal designation from its development
at Google (k8s),
Kubernetes is an open-source container cluster manager.
Kubernetes' primary goal is to provide a platform for
automating deployment, scaling, and operations of application
containers across a cluster of hosts.
Components - Building Blocks of Kubernetes
Nodes
Pods
Labels
Selectors
Controllers
Services
Control Plane
API
Nodes (Minions)
You can think of these as "container clients". These are the individual
hosts (physical or virtual) that Docker is installed on; they host the
various containers within your managed cluster.
Each minion runs etcd (a key/value pair management and communication service, used
by Kubernetes for exchanging messages and reporting on cluster status)
as well as the Kubernetes proxy.
Pods
A pod consists of one or more containers. Those containers are guaranteed
(by the cluster controller) to be located on the same host machine in order to
facilitate sharing of resources.
Pods are assigned unique IPs within each cluster. These allow an application to use
ports without having to worry about conflicting port utilization.
Pods can contain definitions of disk volumes or shares, and then provide access to
those volumes for all the members (containers) within the pod.
Finally, pod management is done through the API or delegated to a controller.
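As a minimal sketch of that resource sharing (the pod name, command, and mount paths here are illustrative assumptions, not from the course), a two-container pod can share an emptyDir volume like this:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo     # hypothetical name
spec:
  volumes:
  - name: shared-data          # pod-level volume visible to both containers
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: web
    image: nginx:1.7.9
    volumeMounts:
    - name: shared-data       # nginx serves what the writer container wrote
      mountPath: /usr/share/nginx/html

Both containers also share the pod's IP, so the writer could reach nginx on localhost:80.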
Labels
Clients can attach "key-value pairs" to any object in the system (like pods or nodes/minions).
These become the labels that identify the objects during configuration and management.
Selectors
Label selectors represent queries that are made against those labels.
They resolve to the corresponding matching objects.
These two items are the primary way that grouping is done in Kubernetes, and they
determine which components a given operation applies to.
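For example (the pod name and label values here are illustrative), a pod labelled in its metadata can later be found with a label selector query:

metadata:
  name: web-pod              # hypothetical pod
  labels:
    app: web
    tier: frontend

kubectl get pods -l app=web                          # equality-based selector
kubectl get pods -l 'tier in (frontend,backend)'     # set-based selector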
Controllers
These are used in the management of your cluster. Controllers are the mechanism by which
your desired configuration state is enforced.
Controllers manage a set of pods and, depending on the desired configuration state, may
engage other controllers to handle replication and scaling (the Replication Controller)
of a given number of containers and pods across the cluster.
The Replication Controller is also responsible for replacing any container in a pod
that fails (based on the desired state of the cluster).
Other controllers that can be engaged include a DaemonSet Controller (which enforces a 1:1 ratio
of pods to minions) and a Job Controller (which runs pods to "completion", such as in batch jobs).
The set of pods that any controller manages is determined by the label selectors that are part of its definition.
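As a sketch of the "run to completion" case (the job name and command are assumptions for illustration), a Job Controller definition looks like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-once              # hypothetical job name
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"]
      restartPolicy: Never   # a Job runs its pods to completion rather than keeping them alive

Unlike a Replication Controller, the Job's pods are not replaced once they exit successfully.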
Services
A service defines a single, stable point of access to a logical set of pods.
This is so that pods can 'work together', as in a multi-tiered application configuration. Each set of pods
that defines and implements a service (like MySQL or Apache) is identified by label selectors.
Kubernetes can then provide service discovery and handle routing with a static IP for
the service, as well as load balancing (round-robin based) connections to that service among the pods that
match the indicated label selectors.
By default a service is only exposed inside a cluster, but it can also be exposed outside the cluster
as needed.
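One way to expose a service outside the cluster is the NodePort type (a sketch; the service name, ports, and selector label are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: web-external         # hypothetical service
spec:
  type: NodePort             # publishes the service on a port of every node
  selector:
    app: web
  ports:
  - port: 80                 # cluster-internal service port
    targetPort: 80           # container port on the matching pods
    nodePort: 30080          # reachable externally on <any node IP>:30080

Clients outside the cluster can then reach the pods via any minion's IP on port 30080.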
Install and Configure Master Controller
Packages and Dependencies
yum install -y ntp (to keep time synchronized across all of the servers)
systemctl enable ntpd && systemctl start ntpd && systemctl status ntpd
Add the host entries: vi /etc/hosts
<master IP address> kubernetes-master
<node IP address> kubernetes-node
Adding the repository
vi /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
yum update
yum install --enablerepo=virt7-docker-common-release etcd kubernetes docker
Configuration of Kubernetes
vi /etc/kubernetes/config and add the lines below:
KUBE_MASTER="--master=http://IP ADDRESS OF MASTER:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://IP ADDRESS OF MASTER:2379"
Configuration of the etcd server
vi /etc/etcd/etcd.conf and add the lines below:
#[Member]
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
Configuration of the API server
vi /etc/kubernetes/apiserver
change
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
to
KUBE_API_ADDRESS="--address=0.0.0.0"
uncomment KUBE_API_PORT="--port=8080"
uncomment KUBELET_PORT="--kubelet-port=10250"
you can change KUBE_SERVICE_ADDRESSES if you want a different CIDR range
comment out the KUBE_ADMISSION_CONTROL line
Start Kubernetes
systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l
(the count should be 4, one per running service)
Install and Configure the Minions
Packages and Dependencies
yum install -y ntp (to keep time synchronized across all of the servers)
systemctl enable ntpd && systemctl start ntpd && systemctl status ntpd
Add the host entries: vi /etc/hosts
<master IP address> kubernetes-master
<node IP address> kubernetes-node
Adding the repository
vi /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
yum update
yum install --enablerepo=virt7-docker-common-release etcd kubernetes docker
Configuration of the Kubernetes nodes
vi /etc/kubernetes/config and add the lines below:
KUBE_MASTER="--master=http://IP ADDRESS OF MASTER:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://IP ADDRESS OF MASTER:2379"
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=hostname" # for multiple hosts use $HOSTNAME (needs to be tested)
KUBELET_API_SERVER="--api-servers=http://IP ADDRESS OF MASTER:8080"
uncomment KUBELET_PORT="--port=10250"
comment out KUBELET_POD_INFRA_CONTAINER
systemctl enable kube-proxy kubelet docker
systemctl start kubelet docker kube-proxy
systemctl restart kubelet docker kube-proxy
Check that the nodes are connected to the master by running this command on the master:
kubectl get nodes
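The output should list each minion with a Ready status, along these lines (the hostname and age shown are illustrative):

NAME              STATUS    AGE
kubernetes-node   Ready     2m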
Create and Deploy a Pod
Before deploying, check whether any pods are already present:
kubectl get pods
Create a folder named Builds and change into it:
mkdir Builds
cd Builds
Creating pods
Now we create nginx.yml for deploying the pod.
For easier YAML editing, use the vim editor:
vim nginx.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
Save and close the file.
To create the pod, use the command below:
kubectl create -f ./nginx.yml
kubectl get pods (now you can see the nginx pod running)
Log in to the node server and check whether the Docker containers are running.
When you check the node you will see 2 containers running:
1. our nginx container
2. Google's "pause" container
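On the node, the check looks roughly like this (the container IDs are illustrative, and the exact pause image name depends on your Kubernetes version):

docker ps
CONTAINER ID   IMAGE                              COMMAND
3f1c2a9b01ab   nginx:1.7.9                        "nginx -g 'daemon off;'"
9e8d7c6b5a43   gcr.io/google_containers/pause     "/pause"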
If you want to describe the pods, use
kubectl describe pods (for all containers)
kubectl describe pod nginx (for a specific container)
This command tells us many things, such as
the container ID, image name,
port, the node the pod is assigned to, IP address, etc.
You are not able to ping the IP address of the pods externally.
If you want to reach those containers, you can deploy another container on the same host system:
kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1
The above command creates a container called busybox and logs you into its command prompt.
Now that you are in the busybox container, use wget against the IP address of the nginx pod
to check whether the pod is working (this busybox pod must be created on the same node where nginx was created):
wget -qO- http://<pod IP address>
This shows we can access the container inside the pod internally.
After checking, you can delete the container with
kubectl delete pod busybox
There is another way to access the pod: port forwarding.
kubectl port-forward nginx :80 &
In the above command we are not specifying the local port; kubectl picks one automatically
and tells you which port traffic is rerouted from. (For better understanding, execute the above command and look at its output.)
For example, following the above command:
wget -qO- http://localhost:34853 (the port is generated automatically)
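When the port-forward starts, kubectl prints the chosen local port, along these lines (the port number is illustrative):

Forwarding from 127.0.0.1:34853 -> 80

Use that printed port in the wget URL above.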
Tags, Labels and Selectors
Now that we have built the pod definition, let's take a look at how we can organize pods using tags, labels and selectors.
Adding a label to a pod
vi nginx-pod-label.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  labels:
    app: nginx2 # to add labels we need this section
spec:
  containers:
  - name: nginx2
    image: nginx:1.7.9
    ports:
    - containerPort: 80
Now create the new pod:
kubectl create -f nginx-pod-label.yml
To query by label, use
kubectl get pods -l app=nginx2
kubectl describe pods -l app=nginx2
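To see which labels each pod carries (and confirm what the selector matched), kubectl can print them alongside the pod list (the ages shown are illustrative):

kubectl get pods --show-labels
NAME      READY     STATUS    RESTARTS   AGE       LABELS
nginx     1/1       Running   0          10m       <none>
nginx2    1/1       Running   0          1m        app=nginx2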
Deployment State
Deployments give us more flexibility than bare pods.
We are going to deploy a production nginx container and a development nginx container, each with its own pod,
and then update one of them but not the other.
vi nginx-deployment-prod.yml
apiVersion: extensions/v1beta1 # check the Kubernetes website for the current Deployment API version
kind: Deployment
metadata:
  name: nginx-deployment-prod
spec:
  replicas: 1 # deployments are often used to deploy multiple pods at the same time
  template:
    metadata:
      labels:
        app: nginx-deployment-prod
    spec:
      containers:
      - name: nginx-deployment-prod
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Now create the deployment:
kubectl create -f nginx-deployment-prod.yml
kubectl get pods
kubectl get deployments (to see the deployment)
Now copy nginx-deployment-prod.yml to nginx-deployment-dev.yml:
cp nginx-deployment-prod.yml nginx-deployment-dev.yml
vi nginx-deployment-dev.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment-dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-deployment-dev
    spec:
      containers:
      - name: nginx-deployment-dev
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Here I'm showing the difference between a deployment and a regular pod.
kubectl create -f nginx-deployment-dev.yml
kubectl get deployments
kubectl describe deployment -l app=nginx-deployment-dev
Using the Deployment type, we can now update any of its pods with a deployment update.
We are going to update nginx-dev:
cp nginx-deployment-dev.yml nginx-deployment-dev-update.yml
vi nginx-deployment-dev-update.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment-dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-deployment-dev
    spec:
      containers:
      - name: nginx-deployment-dev
        image: nginx:1.8
        ports:
        - containerPort: 80
kubectl apply -f nginx-deployment-dev-update.yml
kubectl get pods
kubectl describe deployment -l app=nginx-deployment-dev
You can see the rolling update take place.
With a single command you can update your container infrastructure.
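To watch or audit that rolling update, kubectl's rollout subcommands can be used (a sketch; the output line shown is illustrative):

kubectl rollout status deployment/nginx-deployment-dev
deployment "nginx-deployment-dev" successfully rolled out
kubectl rollout history deployment/nginx-deployment-dev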
Multi-Pod (Container) Replication Controller
There are two ways to run multiple containers:
1st way: multiple containers doing different things in a layered application,
e.g. running a web server plus a file server.
2nd way: multiple copies of the same container, replicated across pods; the way we do this is with a replication controller.
A replication controller is a different kind of deployment, used to deploy 1 to n pods.
vi nginx-multi-label.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-www
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
kubectl create -f nginx-multi-label.yml
kubectl get pods (you can see multiple replicas of the containers across the nodes)
You can check your replication controller by using these commands:
kubectl describe replicationcontroller
kubectl describe pods (look at the output of docker ps on the other nodes)
kubectl get services
At present you have only 1 service running, i.e. the kubernetes service.
You can see the cluster IP and port of the default service.
Now you can test your replication controller by deleting one of the pods:
kubectl delete pod <pod name>
Check that even after deleting one pod, it is created again automatically.
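Before tearing the controller down, you can also resize it in place (the replica count here is just an example):

kubectl scale replicationcontroller nginx-www --replicas=5
kubectl get pods (two additional nginx replicas appear)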
kubectl get replicationcontrollers
kubectl delete replicationcontrollers nginx-www
Now it is deleted
Create and Deploy Service Definitions
kubectl get replicationcontrollers
kubectl get pods
kubectl get nodes
Service definitions start to tie things together.
We have 3 pods across the 3 minions, and they are not exposed except to other pods deployed on
the same host. However, when we define a service, we are actually creating a cluster IP that refers to resources that exist on any of the nodes (minions).
Kubernetes will assign a single IP for all 3 pods so that we can connect to them;
by using that cluster IP address and port we can access the containers on the minions.
Building the yml file for the service definition
vi nginx-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8000
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
kubectl create -f nginx-service.yml
kubectl get services
There you can find the IP address of the service, where you can access the 3 containers.
kubectl describe service nginx-service
Here we are load balancing across all the containers.
To connect to this service we again use busybox:
kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i
wget -qO- http://<service cluster IP>:8000
kubectl delete pod busybox
kubectl delete service nginx-service
Note that even after deleting the service, the individual containers are still running
on the nodes.
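If you also want those containers gone, delete the replication controller that manages them, as earlier:

kubectl delete replicationcontroller nginx-www
kubectl get pods (the nginx-www pods terminate)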