INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
TODO: create a machine to host an NFS file system!!!
prio:
- as soon as I connect over ssh, the virtual machine "hangs"!!! and name resolution stops working :S
cachesize reached ...
tail -f /var/log/messages
if I remove the DNS declaration from my HOST everything goes back to normal ... but I can no longer resolve my own IPs ...
Damn, I'm an idiot! I created a loop: the bridge IP points to my DNS, which calls itself, and round it goes!!
- loop to test the DNS servers ...
- sysctl -w net.ipv4.ip_forward=1
- I should test the wildcard DNS ... test.cluster.....
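A dnsmasq wildcard entry is one way to run that test; a minimal sketch, assuming the apps domain should land on the load balancer at 10.0.5.57 (taken from the haproxy notes further down):
```
# hypothetical: resolve apps.sandbox.okd.local and any subdomain to the LB
address=/apps.sandbox.okd.local/10.0.5.57
```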
nmcli dev show eth0 | grep IP4
https://api.sandbox.okd.local:6443/
from the bootstrap: ssh [email protected]
curl -k -v https://localhost:22623/config/master
OK, so test with 3 control planes and 2 workers ...
this is most likely due to some requirement not being respected ...
worker KO!!!
/var/log/pods/default_bootstrap-machine-config-operator-bootstrap.sandbox.okd.local_786b1ef2b1954aa7196c3660a8186ff8/machine-config-server
refusing to serve bootstrap configuration to pool "worker"
2020-04-05T21:39:56.812099836+00:00 stderr F E0405 21:39:56.812063 1 api.go:103] couldn't get config for req: {worker}, error: refusing to serve bootstrap configuration to pool "worker"
https://github.com/openshift/machine-config-operator/blob/master/pkg/server/api.go
/etc/mcs/bootstrap/machine-configs
/etc/mcs/bootstrap/machine-pools
/// => here's the message!!
https://github.com/openshift/machine-config-operator/blob/671a5546fd3445b50ea5f4259378683cd329025f/pkg/server/bootstrap_server.go
>
func (bsc *bootstrapServer) GetConfig(cr poolRequest) (*runtime.RawExtension, error) {
	if cr.machineConfigPool != "master" {
		return nil, fmt.Errorf("refusing to serve bootstrap configuration to pool %q", cr.machineConfigPool)
	}
///
Damn, here's the commit:
https://github.com/openshift/machine-config-operator/commit/c58b8f9687dd6ea8b245fa8901bc0ab3cc99d439
Hello,
when I try to install OKD, my worker fails: each time it tries to retrieve the worker config, it fails because of this commit.
I get
```
[7323.218867] ignition[551]: GET error: Get https://api-int.sandbox.okd.local:22623/config/worker: EOF
```
and the following logs:
```
2020-04-05T21:39:56.812099836+00:00 stderr F E0405 21:39:56.812063 1 api.go:103] couldn't get config for req: {worker}, error: refusing to serve bootstrap configuration to pool "worker"
```
The master node is OK.
Following your commit, how can I make my worker retrieve its configuration? I checked on my master (i.e. control plane), but mcs is not present and the ignition files are not present either.
Regards,
Damien
curl -k -v https://api-int.sandbox.okd.local:22623/config/worker
ssh [email protected] connection refused ...
master OK, I need to figure out how this one comes up
curl -k -v https://api-int.sandbox.okd.local:22623/config/master
ssh [email protected] connection refused ...
The connection becomes reachable once the file to merge into the config has been retrieved
in my configs there is a notion of merge ...
"merge": [
{
"source": "https://api-int.sandbox.okd.local:22623/config/worker"
}
]
maybe it waits, and once the file is received it applies the config ...
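For reference, the whole pointer config is tiny; a minimal sketch of an Ignition spec 3.0 file carrying only that merge (the surrounding structure is assumed from the Ignition spec, only the source URL comes from my config):
```
{
  "ignition": {
    "version": "3.0.0",
    "config": {
      "merge": [
        { "source": "https://api-int.sandbox.okd.local:22623/config/worker" }
      ]
    }
  }
}
```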
# TODO: prefix worker, control plane, and bootstrap names with the cluster number!!!
damn, once the bootstrap restarted, my
https://api.sandbox.okd.local:6443/version?timeout=32s:
10.0.5.57 forwards to the load balancer
6443
backend ocp4_k8s_api_be
balance roundrobin
mode tcp
server bootstrap 10.0.5.58:6443 check
server control-plane-0 10.0.5.59:6443 check
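For the record, the matching frontend would be something along these lines (a sketch: the frontend name and bind address are assumptions, only the backend above is from my actual config):
```
frontend ocp4_k8s_api_fe
    bind *:6443
    mode tcp
    default_backend ocp4_k8s_api_be
```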
docker run -ti giantswarm/tiny-tools sh
https://jamielinux.com/docs/libvirt-networking-handbook/custom-nat-based-network.html
rpm-ostree install telnet
nmcli dev show
inside the okd cluster!!! dnsmasq part
Add the following configuration:
touch /etc/dnsmasq.d/virbr2.conf && \
echo "except-interface=virbr2" >> /etc/dnsmasq.d/virbr2.conf && \
systemctl restart dnsmasq
service dnsmasq restart
journalctl -u dnsmasq
iptables -A INPUT -i virbr2 -p udp -m udp -m multiport --dports 53,67 -j ACCEPT && \
iptables -A INPUT -i virbr2 -p tcp -m tcp -m multiport --dports 53,67 -j ACCEPT
iptables -t nat -I POSTROUTING -o wlp82s0 -j MASQUERADE
ip neighbor
dnsmasq --conf-file=/var/lib/dnsmasq/virbr2/dnsmasq.conf --pid-file=/var/run/virbr2.pid
> dnsmasq 10179 nobody 7u IPv4 117760 0t0 TCP 10.0.6.1:53 (LISTEN)
Woohoo!!!
lsof -i -P -n |grep dnsmasq
# libvirt changes its setup
strict-order
domain=sandbox.okd.local
expand-hosts
pid-file=/var/run/libvirt/network/okd-dns0.pid
except-interface=lo
bind-dynamic
interface=virbr2
dhcp-range=10.0.6.10,10.0.6.254
dhcp-no-override
dhcp-authoritative
dhcp-lease-max=245
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/okd-dns0.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/okd-dns0.addnhosts
Fix an issue where the load balancer virtual machine cannot talk to the dns virtual machine (Destination Port Unreachable)
iptables -I FORWARD 1 -j ACCEPT
damn, I have to add these rules, and if I reboot I'm in trouble... because they will disappear, or won't come back
with the same priority ...
use a hook at virtual machine startup to rewrite the iptables rules (see the sketch right after the rules below)!!!
iptables -I FORWARD 1 -d 10.0.6.0/24 -j ACCEPT && \
iptables -I FORWARD 1 -s 10.0.6.0/24 -j ACCEPT && \
iptables -I FORWARD 1 -d 10.0.5.0/24 -j ACCEPT && \
iptables -I FORWARD 1 -s 10.0.5.0/24 -j ACCEPT
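A libvirt qemu hook can re-apply them automatically; a minimal sketch, assuming the standard /etc/libvirt/hooks/qemu entry point (libvirt invokes it as `qemu <guest> <operation> <sub-operation> ...`):
```
#!/bin/bash
# /etc/libvirt/hooks/qemu -- hypothetical sketch, must be executable
# $1 = guest name, $2 = operation; re-insert the FORWARD rules when any guest starts
if [ "$2" = "start" ]; then
    for net in 10.0.5.0/24 10.0.6.0/24; do
        # -C checks whether the rule already exists, to avoid piling up duplicates
        iptables -C FORWARD -s "$net" -j ACCEPT 2>/dev/null || iptables -I FORWARD 1 -s "$net" -j ACCEPT
        iptables -C FORWARD -d "$net" -j ACCEPT 2>/dev/null || iptables -I FORWARD 1 -d "$net" -j ACCEPT
    done
fi
```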
iptables -L -v --line-numbers
systemctl restart iptables
ok, I need to use an open network!
systemctl restart iptables
systemctl status iptables
iptables -D FORWARD 4
iptables -F
you have to send a SIGHUP to the root libvirt process ... kill -1 45737
and then all my routing tables change and bingo, it stops working!!!
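To avoid hardcoding the PID, something like this should do (assuming the daemon runs under the name libvirtd):
```
kill -HUP "$(pidof libvirtd)"
```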
/var/lib/libvirt/dnsmasq
./openshift-install wait-for bootstrap-complete --log-level debug
./openshift-install wait-for install-complete --log-level debug
write a script to prepare the Host: ipv4 forward, etc. ... (a sketch below)
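A minimal host-prep sketch covering just the forwarding part (the sysctl.d file name is an arbitrary choice of mine):
```
#!/bin/bash
# enable IPv4 forwarding now, and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-okd-forward.conf
```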
https://console-openshift-console.apps.sandbox.okd.local
# ok, go with this configuration
# https://github.com/openshift/okd/issues/28
# https://builds.coreos.fedoraproject.org/browser
# testing
# https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/31.20191217.2.0/x86_64/fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2.xz
# okd_version: "4.3.0-0.okd-2019-11-15-182656"
# fedora_coreos_cloud_image_name: "fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2"
# fedora_coreos_cloud_image_name_archive: "fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2.xz"
# okd_version: "4.4.0-0.okd-2020-01-28-022517"
#fedora_coreos_cloud_image_name: "fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2" # doesn't work
#fedora_coreos_cloud_image_name: "fedora-coreos-31.20200310.3.0-qemu.x86_64.qcow2" # doesn't work
#fedora_coreos_cloud_image_name: "fedora-coreos-31.20200127.3.0-qemu.x86_64.qcow2" # doesn't work
# fedora_coreos_cloud_image_name: "fedora-coreos-31.20200323.2.1-qemu.x86_64.qcow2" # doesn't work
# Find out why!!!!! the bootstrap doesn't start!!!! OK, it starts if I don't have the ignition file... find out why
# Test with a simple ignition file (worker): OK, it passes
# Find where the problem is in the bootstrap file!!! And the struggle continues!!! How to debug ignition
# Error at $.storage.file.48 line1 col 112716 duplicate entry defined
# Error at $.storage.file.49 line1 col 114217 duplicate entry defined
# ok, I have to fix this via my python script: https://github.com/openshift/installer/pull/3078
# duplicates: /opt/openshift/openshift/99_openshift-machineconfig_99-master-ssh.yaml and /opt/openshift/openshift/99_openshift-machineconfig_99-worker-ssh.yaml
# remove: /usr/local/bin/report-progress.sh WARNING: generate the file with the latest version, then VERIFY!
# fix storage issue when generating bootstrap.ign
# Error at $.storage.file.48 line1 col 112716 duplicate entry defined
# Error at $.storage.file.49 line1 col 114217 duplicate entry defined
# /opt/openshift/openshift/99_openshift-machineconfig_99-master-ssh.yaml
# /opt/openshift/openshift/99_openshift-machineconfig_99-worker-ssh.yaml
# storage = data["storage"]
# print(storage["file"])
# unique = list({ each["path"]: each for each in storage["file"] }.values())
# print(unique)
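The same dedup can be done straight from the shell with the jq installed further down; a sketch, assuming the duplicated entries live under .storage.files in bootstrap.ign:
```
jq '.storage.files |= unique_by(.path)' bootstrap.ign > bootstrap.dedup.ign
```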
// control plane ????
containers with unready status: [machine-config-daemon oauth-proxy]
Configure a registry proxy
> create a container-registry machine
- same level as the dns one
> test with an fcos image (another master)
> check that the container registry contains the images :)
> do a docker pull from inside the image, following
https://computingforgeeks.com/create-docker-container-registry-with-podman-letsencrypt/
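The heart of that guide is just running the registry image with podman; a minimal sketch (name, port, and volume path are assumptions, and the guide adds TLS certificates on top):
```
# hypothetical minimal local registry -- the real setup also wires up TLS and auth
podman run -d --name container-registry \
  -p 5000:5000 \
  -v /var/lib/registry:/var/lib/registry:z \
  docker.io/library/registry:2
```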
/etc/containers/registries
unqualified-search-registries = ['registry.access.redhat.com', 'docker.io']
registries = ['myregistry.local','registry.computingforgeeks.com:5000']
-rw-r--r--. 1 root root 76 6 avril 21:14 registries.conf
- "{name: 'aws-machine-controllers' , image: 'quay.io/openshift/okd-content@sha256:2a39cd7f86fd2ecc98d65e0a84c93d8263ecf31aafb3d49b138a84192301f092'}",
- "{name: 'azure-machine-controllers' , image: 'quay.io/openshift/okd-content@sha256:81939c4826f3f497833b0761d42ad2e611f7e9180a9117a97ae7f4c78f1fe254'}",
- "{name: 'baremetal-installer' , image: 'quay.io/openshift/okd-content@sha256:ddf9dc1dc735552dcab0ce454853c3dd51258fca2481693fae90137a14c07531'}",
- "{name: 'baremetal-machine-controllers' , image: 'quay.io/openshift/okd-content@sha256:227fd1bb9185e667de520c3428e07c2a3b19f47a30f3770a06611d4d9d1901a4'}",
- "{name: 'baremetal-operator' , image: 'quay.io/openshift/okd-content@sha256:227fd1bb9185e667de520c3428e07c2a3b19f47a30f3770a06611d4d9d1901a4'}",
- "{name: 'baremetal-runtimecfg' , image: 'quay.io/openshift/okd-content@sha256:ecdbba18b3a4575af45509048455a268e979f693341443ba942c152ea3dfaf49'}",
- "{name: 'branding' , image: 'quay.io/openshift/okd-content@sha256:d112c2c077ad6d1c80f16a93226adbb9d88a8c26d8b0ddac4ca699d11e24f647'}",
- "{name: 'cli' , image: 'quay.io/openshift/okd-content@sha256:8405a909473c93a51076d0c70f82c9892fb5dc8474ae15d5bdb6a00018e0075f'}",
- "{name: 'cli-artifacts' , image: 'quay.io/openshift/okd-content@sha256:7fb8e76f764d70ceecdc66dd053e4189d46fbe70946044c44918d9940a2633f7'}",
- "{name: 'cloud-credential-operator' , image: 'quay.io/openshift/okd-content@sha256:69fa3062f468292f01566f2cb925d36863cfbe3ccfd0a3dbe46bb2e26b0337e8'}",
- "{name: 'cluster-authentication-operator' , image: 'quay.io/openshift/okd-content@sha256:41166e4dcdb7d1fe8d5317dc3c2c201fe103d14a19778cac5da2592d10599721'}",
- "{name: 'cluster-autoscaler' , image: 'quay.io/openshift/okd-content@sha256:11e29837ccd0d64cfb19091cf56008730ddc86d6620b3aa889bd9c2014335117'}",
- "{name: 'cluster-autoscaler-operator' , image: 'quay.io/openshift/okd-content@sha256:59b1110c495834bfe18c9889c47129f41bd58eb12d805f003d54e81b0ad6d918'}",
- "{name: 'cluster-bootstrap' , image: 'quay.io/openshift/okd-content@sha256:e80f6486d558776d8b331d7856f5ba3bbaff476764b395e82d279ce86c6bb11d'}",
- "{name: 'cluster-config-operator' , image: 'quay.io/openshift/okd-content@sha256:5d34fe2831513a4388362583ab6ab6d856aeb12907250f8432824753a6437a01'}",
- "{name: 'cluster-csi-snapshot-controller-operator' , image: 'quay.io/openshift/okd-content@sha256:3dd3139a0344968315538ce4d7d0aff943cbfd48bde4c6faccdc1e4c80bb5d2c'}",
- "{name: 'cluster-dns-operator' , image: 'quay.io/openshift/okd-content@sha256:ff233dcd19fa2467fb0e7446c457f829eae9b1b37eccd75e2bb26ac52972bd51'}",
- "{name: 'cluster-etcd-operator' , image: 'quay.io/openshift/okd-content@sha256:4001104492ffe03d86f573f7cc89f16683cdccd1d86f4ea0d21bc401bac2a692'}",
- "{name: 'cluster-image-registry-operator' , image: 'quay.io/openshift/okd-content@sha256:116f9a64268036c6fdfa35ef51f0e7bbc5abe200e2fac792e9674b1cea7671ac'}",
- "{name: 'cluster-ingress-operator' , image: 'quay.io/openshift/okd-content@sha256:b05542871710fb7af17cdb4dbc49a951cfb727d7e4242d73efb5289bb3b385bb'}",
- "{name: 'cluster-kube-apiserver-operator' , image: 'quay.io/openshift/okd-content@sha256:fd23da11030623abd6f9f2c730e856e73ba9c7da3b447c6990ee852132af46dd'}",
- "{name: 'cluster-kube-controller-manager-operator' , image: 'quay.io/openshift/okd-content@sha256:f3130930b6745b8fd481c090103f3d75492f9728f58c331f5bd8172464911c43'}",
- "{name: 'cluster-kube-scheduler-operator' , image: 'quay.io/openshift/okd-content@sha256:18d124f9a553efd8a50a5beb4ec76aa58cc3a2d5f45598893dc85c97f081c6a8'}",
- "{name: 'cluster-kube-storage-version-migrator-operator' , image: 'quay.io/openshift/okd-content@sha256:972af34c5eb404edb7048b9071775327f51e6196cb4a6cd0e3544fa4f022ffe2'}",
- "{name: 'cluster-machine-approver' , image: 'quay.io/openshift/okd-content@sha256:15b26a88c1d225efb7c62126bb55f0604ee49f5bc6e54eafca7f48f0e88b8218'}",
- "{name: 'cluster-monitoring-operator' , image: 'quay.io/openshift/okd-content@sha256:35aafd601da44f8c39062fb9c5a5c21f420a5cbc1dfa8f5b9a80827ddd68927f'}",
- "{name: 'cluster-network-operator' , image: 'quay.io/openshift/okd-content@sha256:c11a2f8d7aef45ae58c550bd40b44e14fca1ec86aaa161616996412d0b16c71f'}",
- "{name: 'cluster-node-tuned' , image: 'quay.io/openshift/okd-content@sha256:1e23f784aafacbb7b7671b9b9e7af72efbc6bdbae42c0d8ac33f5658f51b070c'}",
- "{name: 'cluster-node-tuning-operator' , image: 'quay.io/openshift/okd-content@sha256:d9070ce78d5bb44255c90e74110e5ffc606adf7f8c61186ac74ca386882fae35'}",
- "{name: 'cluster-openshift-apiserver-operator' , image: 'quay.io/openshift/okd-content@sha256:819beecb79c15b080a9cba67a44f276e22b8f267799cbd118011851af0e75dae'}",
- "{name: 'cluster-openshift-controller-manager-operator' , image: 'quay.io/openshift/okd-content@sha256:d38661bbd098219d314c04fbf27b124915f6bb25995fff308b62ef40e9665b6a'}",
- "{name: 'cluster-policy-controller' , image: 'quay.io/openshift/okd-content@sha256:e8155385777b43c6ad1f225ec8d57b8898a571ce633e71bf257d45d67a9abb92'}",
- "{name: 'cluster-samples-operator' , image: 'quay.io/openshift/okd-content@sha256:fafac1a76b2a42956f5c6d06b88d4d6653af8001f006a996155ec45403f41590'}",
- "{name: 'cluster-storage-operator' , image: 'quay.io/openshift/okd-content@sha256:92e76e6ba72ba01decfd128856bafd7a93dc722fed929735fa74d43fc3845f3b'}",
- "{name: 'cluster-svcat-apiserver-operator' , image: 'quay.io/openshift/okd-content@sha256:bd424eb7b7e2408165636bf597a98bf1b7da5eb896e6d81e5cbf5d984ec0a576'}",
- "{name: 'cluster-svcat-controller-manager-operator' , image: 'quay.io/openshift/okd-content@sha256:8f1fd27114eadcdeb86b1175e2557448e45276118c4ce906444fbbe5b0250943'}",
- "{name: 'cluster-update-keys' , image: 'quay.io/openshift/okd-content@sha256:7b0812c67a584309ce055a7dc00a4852bf801f3c5068ef63ade3de9993a4c22b'}",
- "{name: 'cluster-version-operator' , image: 'quay.io/openshift/okd-content@sha256:69eeb4c69b035e93a0585541a76ef5991712b1d1c498e13f4809349cd1943616'}",
- "{name: 'configmap-reloader' , image: 'quay.io/openshift/okd-content@sha256:5a80db4af2259ef884bfbcabb14d4938cd6c20a7fbad141b914400ef33cf8523'}",
- "{name: 'console' , image: 'quay.io/openshift/okd-content@sha256:8e47cf46ed255ca1ed324b0bb97a615ddd81324c5d5ca6acc84b23e3a9ef14bf'}",
- "{name: 'console-operator' , image: 'quay.io/openshift/okd-content@sha256:49dca4d9d78082f52dc4693a7b99add7e3256cde78eeb14418496e47017ed492'}",
- "{name: 'container-networking-plugins' , image: 'quay.io/openshift/okd-content@sha256:0bf6503fa80d9ce976a995dcf9b2b01927b919ae47111e36d063d28af6276974'}",
- "{name: 'coredns' , image: 'quay.io/openshift/okd-content@sha256:cd54d1f80d0672638442ffc9076e581c16f6189934deff5dbd50afb9d2a63757'}",
- "{name: 'csi-snapshot-controller' , image: 'quay.io/openshift/okd-content@sha256:3a816a9185ca1ca9a4be461f6fc59133d863e939ef6e26099922eaeb610feacf'}",
- "{name: 'deployer' , image: 'quay.io/openshift/okd-content@sha256:c33e6efdc7f47a8e952d2c993f76af51f01bcfe57e03a77bb970c7e186b3af4b'}",
- "{name: 'docker-builder' , image: 'quay.io/openshift/okd-content@sha256:30512b4dcc153cda7e957155f12676842a2ac2567145242d18857e2c39b93e60'}",
- "{name: 'docker-registry' , image: 'quay.io/openshift/okd-content@sha256:9dd0e622153b441f50f201ed98c92f62d030884583ac6abda5fb41d5645c8b2e'}",
- "{name: 'etcd' , image: 'quay.io/openshift/okd-content@sha256:5b25b115fc463152998c0b55f07d7aa3d4a15f5167f77b9dd976ff243f478278'}",
- "{name: 'gcp-machine-controllers' , image: 'quay.io/openshift/okd-content@sha256:153f135b6e0719217d6798eff7328a87027604442afe2768caaead1e2dae6247'}",
- "{name: 'grafana' , image: 'quay.io/openshift/okd-content@sha256:9cbe5048f0dd799171320ba7e1e83f3cddf2956282a7665e448768eaffd21ecf'}",
- "{name: 'haproxy-router' , image: 'quay.io/openshift/okd-content@sha256:a00e1f0792908c6f9d41a9407e05da36e78a9be8594330f982689f444c382e82'}",
- "{name: 'hyperkube' , image: 'quay.io/openshift/okd-content@sha256:4392b2a41cc6873d0b1c41530b2a817b76737000b5a6fe4d08af91b0943a6580'}",
- "{name: 'insights-operator' , image: 'quay.io/openshift/okd-content@sha256:c7477458411085dc660e598881b9d9edd1eab5650a9551db4cfc80337ac6e5b0'}",
- "{name: 'installer' , image: 'quay.io/openshift/okd-content@sha256:6e878baf4444640774582d1dd68659b19db5c192ac5ed31a46ab95029918b765'}",
- "{name: 'installer-artifacts' , image: 'quay.io/openshift/okd-content@sha256:7fc51300aa4ddfe11b3bb0c2343c4c0ac71f905f4419a57b0fcbef1912330b8c'}",
- "{name: 'ironic' , image: 'quay.io/openshift/okd-content@sha256:227fd1bb9185e667de520c3428e07c2a3b19f47a30f3770a06611d4d9d1901a4'}",
- "{name: 'ironic-hardware-inventory-recorder' , image: 'quay.io/openshift/okd-content@sha256:55fcc7142bcc34f208bf7a69237e6bae732206490dbdf25e93fcb2247e573625'}",
- "{name: 'ironic-inspector' , image: 'quay.io/openshift/okd-content@sha256:227fd1bb9185e667de520c3428e07c2a3b19f47a30f3770a06611d4d9d1901a4'}",
- "{name: 'ironic-ipa-downloader' , image: 'quay.io/openshift/okd-content@sha256:227fd1bb9185e667de520c3428e07c2a3b19f47a30f3770a06611d4d9d1901a4'}",
- "{name: 'ironic-machine-os-downloader' , image: 'quay.io/openshift/okd-content@sha256:227fd1bb9185e667de520c3428e07c2a3b19f47a30f3770a06611d4d9d1901a4'}",
- "{name: 'ironic-static-ip-manager' , image: 'quay.io/openshift/okd-content@sha256:227fd1bb9185e667de520c3428e07c2a3b19f47a30f3770a06611d4d9d1901a4'}",
- "{name: 'jenkins' , image: 'quay.io/openshift/okd-content@sha256:84adf8da7f1c858de02f31f2e38f6a60e805090c6a476390c691a71415700ef4'}",
- "{name: 'jenkins-agent-maven' , image: 'quay.io/openshift/okd-content@sha256:68e224cd555e20d10b74f06577d0dcd9347f2e55beac37ef1232ded3afea4020'}",
- "{name: 'jenkins-agent-nodejs' , image: 'quay.io/openshift/okd-content@sha256:0f79b3e519d192c6a5c481d452328e20c3698ef58296d978d5f78f96ccee8b82'}",
- "{name: 'k8s-prometheus-adapter' , image: 'quay.io/openshift/okd-content@sha256:12bac47c71cb7ef36b6ee7b78e0476fbfb8a67bbf61ac42c461c17c98ac850a6'}",
- "{name: 'keepalived-ipfailover' , image: 'quay.io/openshift/okd-content@sha256:2a8ef3288162925ad6ff20a440b66046c067cf20c41d5b814004d13a0ececfe1'}",
- "{name: 'kube-client-agent' , image: 'quay.io/openshift/okd-content@sha256:801b64e523315d208a4cbb513a53558a5984630603709e15997de19ca83a14ad'}",
- "{name: 'kube-etcd-signer-server' , image: 'quay.io/openshift/okd-content@sha256:8755e700accb5b6d92fd7d2c7b7a6252ed62f843f06fc31812b415a0ac47e0e1'}",
- "{name: 'kube-proxy' , image: 'quay.io/openshift/okd-content@sha256:bb7f85dd7923b3c3eceb31114ec77d152dac4bf391a20780458144017e86fc54'}",
- "{name: 'kube-rbac-proxy' , image: 'quay.io/openshift/okd-content@sha256:4da76173cdd5d8699be46fcaba2c5911f83e9f2dc33b2c47768fda2df5415f1c'}",
- "{name: 'kube-state-metrics' , image: 'quay.io/openshift/okd-content@sha256:db5ab8e8904d7867a714d08578746ecc867456bc339c79e56546866599766229'}",
- "{name: 'kube-storage-version-migrator' , image: 'quay.io/openshift/okd-content@sha256:154e22e58ac70907207106b431629cf43f7f771b230df438143e18f6a6781a58'}",
- "{name: 'kuryr-cni' , image: 'quay.io/openshift/okd-content@sha256:509215475796b5c652f3b25399f38f3303365af1547c691a06add1022f48466d'}",
- "{name: 'kuryr-controller' , image: 'quay.io/openshift/okd-content@sha256:78c5e0895ae1262ab834a821dcd638d2241db6a581408023507c8b88573bdc01'}",
- "{name: 'libvirt-machine-controllers' , image: 'quay.io/openshift/okd-content@sha256:b9d78a6300ae7d414aa2e4cb3353416d1c12c28ca2fb4b8874ad23c2937e4ccc'}",
- "{name: 'local-storage-static-provisioner' , image: 'quay.io/openshift/okd-content@sha256:873e4138f9c01976cc6c95a9390d47b0ab235e743f00ae2f1fa95835af6f8663'}",
- "{name: 'machine-api-operator' , image: 'quay.io/openshift/okd-content@sha256:6dd0044bfeef4a83ba44a61005b07e7fcd8253a807879e87abf7b047f72ac828'}",
- "{name: 'machine-config-operator' , image: 'quay.io/openshift/okd-content@sha256:9e90d4ae5ce69de2cbde214871ae7c64ed49ae20ceca66ede0802cf7a792af8b'}",
- "{name: 'machine-os-content' , image: 'quay.io/openshift/okd-content@sha256:a5e6c4c1296d40b1bb737f729d43908e461587dbfef064a98b61b434a356ad99'}",
- "{name: 'mdns-publisher' , image: 'quay.io/openshift/okd-content@sha256:e9e19656c3606b99aec6563426f0fedb2d7405b48fe108d9a58b88168709b0a2'}",
- "{name: 'multus-admission-controller' , image: 'quay.io/openshift/okd-content@sha256:48fb3fae513be94f37d068506a2fb3553de055bd957524c3d5cd06c3ab63dc71'}",
- "{name: 'multus-cni' , image: 'quay.io/openshift/okd-content@sha256:79e4346edfd48b9310e8e65126520868b366504a130415daaa487437a17f2a2c'}",
- "{name: 'multus-route-override-cni' , image: 'quay.io/openshift/okd-content@sha256:881fb4028fec3fc027980e821307bfd7afbc0587a8d6e597a9243e60163c3569'}",
- "{name: 'multus-whereabouts-ipam-cni' , image: 'quay.io/openshift/okd-content@sha256:2cc79c246065854375c247757464f13e32e871901cdc36d95f6118db3cd62a5b'}",
- "{name: 'must-gather' , image: 'quay.io/openshift/okd-content@sha256:a273f5ac7f1ad8f7ffab45205ac36c8dff92d9107ef3ae429eeb135fa8057b8b'}",
- "{name: 'oauth-apiserver' , image: 'quay.io/openshift/okd-content@sha256:444be72589abd150e048f5008c819c3c4527bf4197bb93bbdeb2f012e80e495c'}",
- "{name: 'oauth-proxy' , image: 'quay.io/openshift/okd-content@sha256:0f7a4323b2f2ef2343cc44858bc8f88fdf5ad7a61a037d59072557e7afaed415'}",
- "{name: 'oauth-proxy-samples' , image: 'quay.io/openshift/okd-content@sha256:0656318cefa7961a1333b9de440fcc526ca76065f855a7a3082dc35d21be134f'}",
- "{name: 'oauth-server' , image: 'quay.io/openshift/okd-content@sha256:30381dcfddb506e6704cd19967c8774de30b701894788423c657f1d87f915b17'}",
- "{name: 'openshift-apiserver' , image: 'quay.io/openshift/okd-content@sha256:3008b05ae0a3a7b38b77b281e60bb972d5b6d80883b300addc5e966aeb83138a'}",
- "{name: 'openshift-controller-manager' , image: 'quay.io/openshift/okd-content@sha256:22b1cc34d5370882e4d527b53fbf828239047c6d3bff3544d500cec80d0681c4'}",
- "{name: 'openshift-state-metrics' , image: 'quay.io/openshift/okd-content@sha256:cab7b3add9e14e41c137081b1eb3ac0fc43b4df7682fff1442d8f7fbf2415477'}",
- "{name: 'openstack-machine-controllers' , image: 'quay.io/openshift/okd-content@sha256:de24dd488f60c7cbfac81b9587686d0fe3e4612178a4b8a4fb26a34c724b7eec'}",
- "{name: 'operator-lifecycle-manager' , image: 'quay.io/openshift/okd-content@sha256:2248d2606c161d0442c99adfc608e2443e015fc7fa33c5f7382446ecf68e21d5'}",
- "{name: 'operator-marketplace' , image: 'quay.io/openshift/okd-content@sha256:3480cec3290801b92136deea676bb350bf1cd480f1ca2c82f1cb5f5fa822d217'}",
- "{name: 'operator-registry' , image: 'quay.io/openshift/okd-content@sha256:14b75c4e4f7878f954f7f60233833f3356d99a51c5e08960b673da29d74f7751'}",
- "{name: 'ovirt-machine-controllers' , image: 'quay.io/openshift/okd-content@sha256:fda4fccbed0a5be00d0d04459a49b21714a0e8240037a9951096bd8dac421eb5'}",
- "{name: 'ovn-kubernetes' , image: 'quay.io/openshift/okd-content@sha256:e60d74ffe7b48fa38e91f22ecf5ff37f18b26493fb4dfb3500fcfe5afdd16599'}",
- "{name: 'pod' , image: 'quay.io/openshift/okd-content@sha256:6e848b9eb42cd4a009b3f02518b3699cbc12d5f84fa2737084c7b73df4f5f5af'}",
- "{name: 'prom-label-proxy' , image: 'quay.io/openshift/okd-content@sha256:8d83284334b9e4d5b25b380ff6b29c27caa1a0234cff00e8eddb32b45f25b63b'}",
- "{name: 'prometheus' , image: 'quay.io/openshift/okd-content@sha256:5af0373659974782379d90d9a174352dd8f85cb7327cc48ef36cae4e8ba5903f'}",
- "{name: 'prometheus-alertmanager' , image: 'quay.io/openshift/okd-content@sha256:25bed531ccb0ff16ce19b927265f03cb9b2d572caa224ef302002269e925d83c'}",
- "{name: 'prometheus-config-reloader' , image: 'quay.io/openshift/okd-content@sha256:deacbd618b3c037cc8c99a83db2c2a1053db517b0a0bfdfdeb309591559c3eea'}",
- "{name: 'prometheus-node-exporter' , image: 'quay.io/openshift/okd-content@sha256:c199e7353642ed1a4237416055a75b0e415034c7ec48bbc8ae8d12b72552f819'}",
- "{name: 'prometheus-operator' , image: 'quay.io/openshift/okd-content@sha256:ec28b9dc5ad9184d0d70b85e5bc618c809084b293cbc57c215bf845bf7147b2b'}",
- "{name: 'sdn' , image: 'quay.io/openshift/okd-content@sha256:42670e6c5bed601a38cd505e7c1b33c37bab0b55f0647b8e27113c1689cbe100'}",
- "{name: 'service-ca-operator' , image: 'quay.io/openshift/okd-content@sha256:363c11f87a66fba16a89225cfb09f09ee1f65ae2af2f7f3c23209ab60f7060b2'}",
- "{name: 'service-catalog' , image: 'quay.io/openshift/okd-content@sha256:24121dc11c9d253da0b1cf337b6d5ceeaa8ccd25bb3d7dd7341480360bb87551'}",
- "{name: 'telemeter' , image: 'quay.io/openshift/okd-content@sha256:6b30f9823d679c3554e6d1bf68e79702dd403ad1652383ab219205e29a4d3356'}",
- "{name: 'tests' , image: 'quay.io/openshift/okd-content@sha256:308f7ab2a14da09dcbc727eb5a2547ba037a9dfe72cd11a41dabd7d9271e0507'}",
- "{name: 'thanos' , image: 'quay.io/openshift/okd-content@sha256:156ee3923fa70e7bd3b7a173f0e7dc7d9fd50dcc0216b1fefc9ed324f34b07f8'}",
damn, my images are prefixed with quay.io!!!!!
systemctl status podman-registry.service
journalctl -p err
systemctl is-active --quiet podman-registry.service
container-registry.sandbox.okd.local
install telnet
rpm-ostree install telnet
journalctl -u crio.service
- list the registries :)
podman --log-level=debug pull quay.io/openshift/okd-content@sha256:6e848b9eb42cd4a009b3f02518b3699cbc12d5f84fa2737084c7b73df4f5f5af
- "{name: 'openshift-apiserver' , image: 'quay.io/openshift/okd-content@sha256:3008b05ae0a3a7b38b77b281e60bb972d5b6d80883b300addc5e966aeb83138a'}", 4.4.0-0.okd-2020-03-28-092308-openshift-apiserver
podman --log-level=debug pull quay.io/openshift/okd-content:4.4.0-0.okd-2020-03-28-092308-openshift-apiserver
podman --log-level=debug pull quay.io/openshift/okd-content@sha256:3008b05ae0a3a7b38b77b281e60bb972d5b6d80883b300addc5e966aeb83138a
list repositories
curl http://localhost/v2/_catalog
lsof -i -P -n | grep LISTEN
netstat -tulpn | grep LISTEN
//////////////////////
- container-registry
https://docs.openshift.com/container-platform/4.2/openshift_images/image-configuration.html
curl http://container-registry.sandbox.okd.local:80/v2/openshift/okd-content/manifests/latest
- worker + control plane
vi /etc/containers/registries.conf
```
[[registry]]
prefix = "quay.io/openshift"
location = "quay.io/openshift"
mirror-by-digest-only = true
[[registry.mirror]]
location = "container-registry.sandbox.okd.local/openshift"
insecure = true
```
chmod u=rw,g=r,o=r /etc/containers/registries.conf
podman info --debug
podman --log-level=debug pull quay.io/openshift/okd-content@sha256:b9d78a6300ae7d414aa2e4cb3353416d1c12c28ca2fb4b8874ad23c2937e4ccc
---------------- jq installation
wget https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 -O jq
chmod +x jq
mv jq /usr/local/bin
sudo yum install python3
-----------------------
TODO: regenerate the certificates: push them back into the registry, regenerate the ignition files, and rerun the bootstrap installation
curl -k https://container-registry.sandbox.okd.local:443/v2/_catalog
podman --log-level=debug pull quay.io/openshift/okd-content@sha256:b9d78a6300ae7d414aa2e4cb3353416d1c12c28ca2fb4b8874ad23c2937e4ccc
> so the image does get downloaded via dns
> damn, it runs at full speed!!!! woohoo
14:25
3 minutes to download at boot
then the cpu maxes out pulling the images from the registry
14:30 reboot !
curl -L -k https://localhost:22623/config/master
curl -L -k https://api-int.sandbox.okd.local:22623/config/master
curl -L -k https://api-int.sandbox.okd.local:22623/config/master > reponse_bootstrap_to_master.json
curl -L -k https://localhost:22623/config/master
curl -L -k https://localhost:22623/config/worker
journalctl -u machine-config-daemon-firstboot.service
dig container-registry.sandbox.okd.local
podman --log-level=debug pull quay.io/openshift/okd-content@sha256:9e90d4ae5ce69de2cbde214871ae7c64ed49ae20ceca66ede0802cf7a792af8b
DAMN, the dns is potentially not up yet when the service starts ...
so I should test after a while whether the service is up ...
OTHERWISE I add the DNS ... into the master.json ... (see the drop-in sketch below)
[systemd]
Failed Units: 1
machine-config-daemon-firstboot.service
[root@control-plane-0 damien]# systemctl restart machine-config-daemon-firstboot.service
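One way to make the unit wait for DNS instead of failing: a drop-in with an ExecStartPre that blocks until the registry name resolves. A sketch only, assuming getent is available and the drop-in is created with systemctl edit machine-config-daemon-firstboot.service:
```
# hypothetical drop-in: /etc/systemd/system/machine-config-daemon-firstboot.service.d/wait-dns.conf
[Service]
ExecStartPre=/bin/sh -c 'until getent hosts container-registry.sandbox.okd.local; do sleep 2; done'
```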
TODO: before creating the worker, I must make sure the config is reachable ...
- bootstrap
curl -k -v https://localhost:22623/config/master
lsof -i -P -n | grep 22623
machine-c 4167 root 5u IPv6 40804 0t0 TCP *:22623 (LISTEN)
ps aux | grep 4167
root 4167 0.0 0.3 135200 29368 ? Ssl 19:41 0:00 /usr/bin/machine-config-server bootstrap
./status:105: ├─machine.slice
./status:121: │ └─4167 /usr/bin/machine-config-server bootstrap
machine-config-daemon
- master
machine-config-daemon
systemctl start machine.slice
TODO: try with the beta ...
https://github.com/openshift/installer/blob/master/docs/user/troubleshooting.md#installer-fails-to-initialize-the-cluster
The status of the Machine API Operator can be checked by running the following command from the machine used to install the cluster:
[damien@localhost .okd]$ oc --kubeconfig=./auth/kubeconfig --namespace=openshift-machine-api get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
machine-api-operator 0/1 1 0 11m
[damien@localhost .okd]$ oc --kubeconfig=./auth/kubeconfig --namespace=openshift-machine-api get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
machine-api-operator 0/1 1 0 11m
[damien@localhost .okd]$ oc --kubeconfig=./auth/kubeconfig --namespace=openshift-machine-api get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
machine-api-operator 1/1 1 1 15m
[damien@localhost .okd]$ oc --kubeconfig=./auth/kubeconfig --namespace=openshift-machine-api get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
machine-api-operator 1/1 1 1 15m
oc --kubeconfig=./auth/kubeconfig --namespace=openshift-machine-api get deployments
oc --kubeconfig=./auth/kubeconfig --namespace=openshift-machine-api logs deployments/machine-api-controllers --container=machine-controller
oc --kubeconfig=./auth/kubeconfig get clusteroperator
in case dns is failing
oc --kubeconfig=./auth/kubeconfig get clusteroperator dns -oyaml
oc --kubeconfig=./auth/kubeconfig get pods --all-namespaces
> super useful command to see all the pods and their state!!!
openshift-dns-operator
oc --kubeconfig=./auth/kubeconfig describe -n openshift-ingress pod/router-default-697dfdbdb9-pl6nz
oc --kubeconfig=./auth/kubeconfig describe -n openshift-dns-operator pod/dns-operator-6666c7b7f-9mstk
oc --kubeconfig=./auth/kubeconfig describe -n openshift-cluster-storage-operator
csi-snapshot-controller-operator-55f9cdd7f6-kb2dw
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 Insufficient cpu.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 Insufficient cpu.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 Insufficient cpu.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 Insufficient cpu.
------------
oc --kubeconfig=./auth/kubeconfig describe -n openshift-ingress pod/router-default-697dfdbdb9-pl6nz
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling 19m (x14 over 21m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling 61s (x21 over 11m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
oc --kubeconfig=./auth/kubeconfig get clusteroperator ingress -oyaml
oc --kubeconfig=./auth/kubeconfig describe -n openshift-ingress pod/router-default-697dfdbdb9-pl6nz
oc --kubeconfig=./auth/kubeconfig logs -n openshift-ingress pod/router-default-697dfdbdb9-pl6nz
oc --kubeconfig=./auth/kubeconfig describe -n openshift-ingress pod/router-default-697dfdbdb9-pl6nz
Node-Selectors: kubernetes.io/os=linux
node-role.kubernetes.io/worker=
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling <unknown> default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling 29m (x14 over 31m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling 36s (x28 over 21m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
The ingress doesn't work because I'm missing the workers!!!
export KUBECONFIG=~/.okd/auth/kubeconfig
oc whoami
oc get nodes
oc get csr
[damien@localhost openshift-sandbox]$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-7gslm 105m system:node:control-plane-1.sandbox.okd.local Approved,Issued
csr-h8bz2 29m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-kpztf 104m system:node:control-plane-2.sandbox.okd.local Approved,Issued
csr-v4ghm 106m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-wdsfg 105m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-xcl7h 105m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-ztx6q 106m system:node:control-plane-0.sandbox.okd.local Approved,Issued
[damien@localhost openshift-sandbox]$
wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x jq
sudo mv jq /usr/local/bin/
jq --version
oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
oc get clusteroperator
oc get pods --all-namespaces
oc describe -n openshift-machine-config-operator pod/etcd-quorum-guard-69c6fcb454-2gx28
oc describe -n openshift-kube-apiserver pod/kube-apiserver-control-plane-0.sandbox.okd.local
oc describe -n openshift-machine-config-operator pod/etcd-quorum-guard-69c6fcb454-nj5rx
oc describe -n openshift-kube-apiserver pod/kube-apiserver-compute-0.sandbox.okd.local
oc describe -n openshift-etcd-operator pod/etcd-operator-5bb5599569-wz7nf
oc logs -n openshift-etcd-operator pod/etcd-operator-5bb5599569-wz7nf -c operator
oc get clusteroperator authentication -oyaml
= bootstrap
/system.slice/kubelet.service
watch systemctl status kubelet.service
watch systemctl status bootkube.service
> active, then fails
> fails in bootstrap ...
journalctl -b -f -u bootkube.service
systemctl edit --full bootkube.service
/opt/openshift
podman run --net=host --rm \
  --volume /opt/openshift:/assets:z \
  --volume /etc/kubernetes:/etc/kubernetes:z \
  quay.io/openshift/okd-content@sha256:e80f6486d558776d8b331d7856f5ba3bbaff476764b395e82d279ce86c6bb11d \
  start --tear-down-early=false --asset-dir=/assets \
  --required-pods=openshift-kube-apiserver/kube-apiserver,openshift-kube-scheduler/openshift-kube-scheduler,openshift-kube-controller-manager/kube-controller-manager,openshift-cluster-version/cluster-version-operator
= control plane
systemctl start machine.slice
lsof -i -P -n | grep LISTEN | grep 22623
curl -k -L https://api-int.sandbox.okd.local:6443
[damien@localhost openshift-sandbox]$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-4kpvk 34m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-4mc6q 49m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-9pt8x 49m system:node:control-plane-0.sandbox.okd.local Approved,Issued
csr-d7lws 48m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-dcffz 10m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-g4kr5 10m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-hn2pt 33m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-n4clv 23s system:node:compute-2.sandbox.okd.local Pending
csr-nr6hp 12s system:node:compute-1.sandbox.okd.local Pending
csr-nwvvv 50m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-tnrt5 48m system:node:control-plane-2.sandbox.okd.local Approved,Issued
csr-vk897 28m system:node:compute-0.sandbox.okd.local Approved,Issued
csr-vs658 48m system:node:control-plane-1.sandbox.okd.local Approved,Issued
[damien@localhost openshift-sandbox]$ oc get clusteroperator
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication Unknown Unknown True 47m
cloud-credential 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 50m
cluster-autoscaler 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 40m
console 4.4.0-0.okd-2020-04-07-175212-beta2 Unknown True False 41m
dns 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 45m
etcd 4.4.0-0.okd-2020-04-07-175212-beta2 True True True 39m
image-registry 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 42m
ingress unknown False True True 41m
insights 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 42m
kube-apiserver 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 43m
kube-controller-manager 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 43m
kube-scheduler 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 43m
kube-storage-version-migrator 4.4.0-0.okd-2020-04-07-175212-beta2 False False False 47m
machine-api 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 46m
machine-config 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 44m
marketplace 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 41m
monitoring False True True 36m
network 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 44m
node-tuning 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 47m
openshift-apiserver 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 39m
openshift-controller-manager 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 42m
openshift-samples 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 39m
operator-lifecycle-manager 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 46m
operator-lifecycle-manager-catalog 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 46m
operator-lifecycle-manager-packageserver 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 13m
service-ca 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 47m
service-catalog-apiserver 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 47m
service-catalog-controller-manager 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 47m
storage 4.4.0-0.okd-2020-04-07-175212-beta2 True False False 42m
[damien@localhost openshift-sandbox]$ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
certificatesigningrequest.certificates.k8s.io/csr-n4clv approved
certificatesigningrequest.certificates.k8s.io/csr-nr6hp approved
[damien@localhost openshift-sandbox]$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-4kpvk 34m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-4mc6q 49m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-9pt8x 50m system:node:control-plane-0.sandbox.okd.local Approved,Issued
csr-d7lws 48m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-dcffz 10m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-g4kr5 10m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-hn2pt 33m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-n4clv 29s system:node:compute-2.sandbox.okd.local Approved,Issued
csr-nr6hp 18s system:node:compute-1.sandbox.okd.local Approved,Issued
csr-nwvvv 50m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-tnrt5 48m system:node:control-plane-2.sandbox.okd.local Approved,Issued
csr-vk897 28m system:node:compute-0.sandbox.okd.local Approved,Issued
csr-vs658 49m system:node:control-plane-1.sandbox.okd.local Approved,Issued
[damien@localhost openshift-sandbox]$ oc get pods --all-namespaces | grep Pending
openshift-cluster-storage-operator csi-snapshot-controller-operator-55f9cdd7f6-fzj2f 0/1 Pending 0 77m
openshift-ingress router-default-697dfdbdb9-wnw65 0/1 Pending 0 68m
openshift-ingress router-default-78d6756977-lkljw 0/1 Pending 0 68m
openshift-kube-storage-version-migrator migrator-95785f5b5-2dcj6 0/1 Pending 0 74m
openshift-marketplace community-operators-795d8c78b5-cvrnz 0/1 Pending 0 7m54s
openshift-marketplace community-operators-c59985979-phts9 0/1 Pending 0 68m
openshift-monitoring kube-state-metrics-777885559b-s9ccd 0/3 Pending 0 68m
openshift-monitoring openshift-state-metrics-6b5f5d4f6-xm6tm 0/3 Pending 0 68m
openshift-monitoring prometheus-adapter-7d88cf8d9d-4d9l7 0/1 Pending 0 63m
openshift-monitoring prometheus-adapter-7d88cf8d9d-52cgr 0/1 Pending 0 63m
openshift-monitoring telemeter-client-65ccb68d56-s4nb6 0/3 Pending 0 63m
openshift-monitoring telemeter-client-6694c64b66-dfrr8 0/3 Pending 0 68m
oc describe -n openshift-ingress pod/router-default-697dfdbdb9-wnw65
oc describe -n openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-55f9cdd7f6-fzj2f
oc describe -n openshift-monitoring pod/prometheus-adapter-7d88cf8d9d-4d9l7
oc get deployment --all-namespaces
oc get deployment -n openshift-ingress router-default -o yaml
oc get rs --all-namespaces
oc get rs -n openshift-ingress router-default-78d6756977 -oyaml
oc get deployment -n openshift-ingress router-default
oc rollout status -n openshift-ingress deployments/router-default
> error: deployment "router-default" exceeded its progress deadline
oc rollout retry -n openshift-ingress deployments/router-default
oc describe -n openshift-ingress pod/router-default-78d6756977-lkljw
oc logs -n openshift-ingress pod/router-default-78d6756977-lkljw -c router
> empty
oc scale --replicas=0 deployment/router-default -n openshift-ingress
oc scale --replicas=2 deployment/router-default -n openshift-ingress
/!\ one of my control planes got killed ... not enough resources!!! bye bye okd ...