---
title: Backing Up Tanzu Kubernetes Grid Integrated Edition
owner: TKGI
---
This topic describes how to use BOSH Backup and Restore (BBR) to back up the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) Control Plane and its cluster deployments.
## <a id="overview"></a> Overview
The BOSH Director, Tanzu Kubernetes Grid Integrated Edition Control Plane, and cluster deployments include custom backup and restore scripts that encapsulate the correct procedure
for backing up and restoring the Director and Control Plane.
BBR orchestrates running the backup and restore scripts and transferring the generated backup artifacts to and from a backup directory.
If configured correctly, BBR can use TLS to communicate securely with backup targets.
* To perform a restore of the BOSH Director, see [Restore the BOSH Director](bbr-restore.html#redeploy-restore-director).
* To perform a restore of the TKGI Control Plane, see [Restore the Tanzu Kubernetes Grid Integrated Edition Control Plane](bbr-restore.html#redeploy-restore-control-plane).
* To perform a restore of a cluster deployment, see [Restore Tanzu Kubernetes Grid Integrated Edition Clusters](bbr-restore.html#redeploy-restore-clusters).
To view the BBR release notes, see the Cloud Foundry documentation, [BOSH Backup and Restore Release Notes](https://docs.cloudfoundry.org/bbr/bbr-rn.html).
## <a id='recs'></a> Recommendations
<%= vars.recommended_by %> recommends:
* Follow the full procedure documented in this topic when creating a backup. This ensures that you always have a consistent backup of Ops Manager and Tanzu Kubernetes Grid Integrated Edition to restore from.
* Back up frequently, especially before upgrading your Tanzu Kubernetes Grid Integrated Edition deployment.
* For BOSH v270.0 and above (currently in <%= vars.platform_name %> 2.7), prune the BOSH blobstore by running `bosh clean-up --all` before running a backup of the BOSH Director. This removes all unused resources, including packages compiled against older stemcell versions, which can result in a smaller, faster backup of the BOSH Director. For more information, see the [`clean-up`](https://bosh.io/docs/cli-v2/#clean-up) command.
<p class="note"><strong>Note:</strong> The command <code>bosh clean-up --all</code> is a destructive operation and can remove resources that are unused but still needed. For example, if an On-Demand Service Broker such as Tanzu Kubernetes Grid Integrated Edition is deployed <strong>and</strong> no service instances have been created, the releases needed to create a service instance are categorized as unused and removed.</p>
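For example, a minimal invocation of `bosh clean-up --all` that reuses the BOSH Commandline Credentials copied from the BOSH Director tile (the credential values shown here are placeholders):
```console
$ BOSH_CLIENT=ops_manager BOSH_CLIENT_SECRET=p455w0rd \
  BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate \
  BOSH_ENVIRONMENT=10.0.0.5 bosh clean-up --all
```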
## <a id='supported'></a> Supported Components
This section describes the components that are supported and not supported by BBR.
<%= partial 'bbr-supported-components' %>
<%= partial 'bbr-unsupported-components' %>
## <a id="prepare"></a> Prepare to Back Up
<%= partial 'preparing-for-bbr' %>
## <a id='backup'></a> Back Up Tanzu Kubernetes Grid Integrated Edition
To back up your Tanzu Kubernetes Grid Integrated Edition environment, you must first connect to your jump box before running `bbr` backup commands.
### <a id='connect-to-jumpbox'></a> Connect to Your Jump Box
You can establish a connection to your jump box in one of the following ways:
* [Connect with SSH](#ssh)
* [Connect with BOSH_ALL_PROXY](#bosh-all-proxy)
For general information about the jump box, see [Installing BOSH Backup and Restore](bbr-install.html).
#### <a id='ssh'></a> Connect with SSH
To connect to your jump box with SSH, do one of the following:
+ **If you are using the Ops Manager VM as your jump box, log in to the Ops Manager VM.** See
[Log in to the Ops Manager VM with SSH](https://techdocs.broadcom.com/us/en/vmware-tanzu/platform/tanzu-operations-manager/3-0/tanzu-ops-manager/install-trouble-advanced.html#ssh) in _Advanced Troubleshooting with the BOSH CLI_.
<br><br>
+ **If you want to connect to your jump box using the command line, run the following
command:**
```
ssh -i PATH-TO-KEY JUMP-BOX-USERNAME@JUMP-BOX-ADDRESS
```
Where:
* `PATH-TO-KEY` is the local path to your private key for the jump box host.
* `JUMP-BOX-USERNAME` is your jump box user name.
* `JUMP-BOX-ADDRESS` is the address of the jump box.
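For example, assuming a private key stored at `~/.ssh/jumpbox.pem`, a jump box user named `ubuntu`, and a jump box address of `203.0.113.10` (all hypothetical values):
```console
$ ssh -i ~/.ssh/jumpbox.pem ubuntu@203.0.113.10
```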
<p class="note"><strong>Note:</strong> If you connect to your jump box with SSH, you must run the BBR commands
in the following sections from within your jump box.</p>
#### <a id='bosh-all-proxy'></a> Connect with BOSH_ALL_PROXY
You can use the `BOSH_ALL_PROXY` environment variable to open an SSH tunnel with SOCKS5 to your jump box.
This tunnel enables you to forward requests from your local machine to the BOSH Director through the jump box.
When `BOSH_ALL_PROXY` is set, BBR always uses its value to forward requests to the BOSH Director.
<p class="note"><strong>Note:</strong>
For the following procedures to work, ensure the SOCKS port is not already in use by a different tunnel or process.</p>
To connect with `BOSH_ALL_PROXY`, do one of the following:
* **If you want to establish the tunnel separate from the BOSH CLI, do the following:**
1. Establish the tunnel and make it available on a local port by running the following command:
```
ssh -4 -D SOCKS-PORT -fNC JUMP-BOX-USERNAME@JUMP-BOX-ADDRESS -i JUMP-BOX-KEY-FILE -o ServerAliveInterval=60
```
Where:
* `SOCKS-PORT` is the local SOCKS port.
* `JUMP-BOX-USERNAME` is your jump box user name.
* `JUMP-BOX-ADDRESS` is the address of the jump box.
* `JUMP-BOX-KEY-FILE` is the local SSH private key for accessing the jump box.
For example:
```console
$ ssh -4 -D 12345 -fNC [email protected] -i jumpbox.key -o ServerAliveInterval=60
```
1. Provide the BOSH CLI with access to the tunnel through `BOSH_ALL_PROXY` by running the following command:
```
export BOSH_ALL_PROXY=socks5://localhost:SOCKS-PORT
```
Where `SOCKS-PORT` is your local SOCKS port.
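For example, using the same SOCKS port as the tunnel established in the previous step:
```console
$ export BOSH_ALL_PROXY=socks5://localhost:12345
```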
* **If you want to establish the tunnel using the BOSH CLI, do the following:**
1. Provide the BOSH CLI with the necessary SSH credentials to create the
tunnel by running the following command:
```
export BOSH_ALL_PROXY=ssh+socks5://JUMP-BOX-USERNAME@JUMP-BOX-ADDRESS:SOCKS-PORT?private_key=JUMP-BOX-KEY-FILE
```
Where:
* `JUMP-BOX-USERNAME` is your jump box user name.
* `JUMP-BOX-ADDRESS` is the address of the jump box.
* `SOCKS-PORT` is your local SOCKS port.
* `JUMP-BOX-KEY-FILE` is the local SSH private key for accessing the jump box.
For example:
```console
$ export BOSH_ALL_PROXY=ssh+socks5://[email protected]:12345?private_key=jumpbox.key
```
<p class="note"><strong>Note:</strong> Using <code>BOSH_ALL_PROXY</code> can result in longer
backup and restore times because of network performance degradation. All operations must pass
through the proxy, which means moving backup artifacts can be significantly slower.</p>
<div class="note warning"><strong>Warning:</strong> In BBR v1.5.0 and earlier,
the tunnel created by the BOSH CLI does not include the <code>ServerAliveInterval</code> flag.
This might result in your SSH connection timing out when transferring large artifacts.
In BBR v1.5.1, the <code>ServerAliveInterval</code> flag is included.
For more information,
see <a href="https://github.com/cloudfoundry-incubator/bosh-backup-and-restore/releases/tag/v1.5.1">bosh-backup-and-restore v1.5.1</a> on GitHub.
</div>
### <a id='export-opsman-settings'></a> Back Up Installation Settings
To ensure your BBR backup is reliable, frequently export your Ops Manager installation settings as a backup.
There are two ways to export Ops Manager installation settings:
* [Export settings using the Ops Manager UI](#export-via-ui)
* [Export settings using the Ops Manager API](#export-via-api)
<p class="note"><strong>Note</strong>: If you want to automate the back up process,
you can use the Ops Manager API to export your installation settings.</p>
When exporting your installation settings, keep in mind the following:
* Always export your installation settings before following the steps in the
[Restore the BOSH Director](bbr-restore.html#redeploy-restore-director)
section of the *Restoring Tanzu Kubernetes Grid Integrated Edition* topic.
* You can only export Ops Manager installation settings after you have deployed at least once.
* Your Ops Manager settings export is only a backup of Ops Manager configuration settings.
The export is not a backup of your VMs or any external MySQL databases.
* Your Ops Manager settings export is encrypted. Make sure you keep track of your Decryption Passphrase
because this is needed to restore the Ops Manager settings.
#### <a id='export-via-ui'></a> Export Settings Using the Ops Manager UI
To export your Ops Manager installation settings using the Ops Manager UI, perform the following steps:
1. From the **Installation Dashboard** in the Ops Manager interface, click your user name in the top right navigation.
1. Select **Settings**.
1. Select **Export Installation Settings**.
1. Click **Export Installation Settings**.
#### <a id='export-via-api'></a> Export Settings Using the Ops Manager API
To export your Ops Manager installation settings using the Ops Manager API, perform the following steps:
1. To export your installation settings using the Ops Manager API, run the following command:
```
curl https://OPS-MAN-FQDN/api/v0/installation_asset_collection \
-H "Authorization: Bearer UAA-ACCESS-TOKEN" > installation.zip
```
Where:
* `OPS-MAN-FQDN` is the fully-qualified domain name (FQDN) for your Ops Manager deployment.
* `UAA-ACCESS-TOKEN` is your UAA access token. For more information, see _Access the API_ in the Ops Manager API documentation.
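For example, assuming an Ops Manager deployment at `opsman.example.com` and a UAA access token already exported in the `UAA_ACCESS_TOKEN` environment variable (both hypothetical):
```console
$ curl https://opsman.example.com/api/v0/installation_asset_collection \
  -H "Authorization: Bearer ${UAA_ACCESS_TOKEN}" > installation.zip
```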
### <a id='back-up-director'></a> Back Up the Tanzu Kubernetes Grid Integrated Edition BOSH Director
To back up the BOSH Director, first validate your current configuration, and then run the `bbr` backup command.
#### <a id='back-up-director-validate'></a> Validate the Tanzu Kubernetes Grid Integrated Edition BOSH Director
1. To confirm that your BOSH Director is reachable and has the correct BBR scripts, run the following command:
```
bbr director --host BOSH-DIRECTOR-IP --username bbr \
--private-key-path PRIVATE-KEY-FILE pre-backup-check
```
Where:
* `BOSH-DIRECTOR-IP` is the address of the BOSH Director. If the BOSH Director is public, `BOSH-DIRECTOR-IP` is a URL, such as
`https://my-bosh.xxx.cf-app.com`. Otherwise, this is the internal IP address, which you can retrieve as shown in
[Retrieve the BOSH Director Address](#bosh-address).
* `PRIVATE-KEY-FILE` is the path to the private key file that you can create from `Bbr Ssh Credentials` as shown in
[Download the BBR SSH Credentials](#bbr-ssh-creds).
For example:
```console
$ bbr director --host 10.0.0.5 --username bbr \
--private-key-path private-key.pem pre-backup-check
```
1. If the pre-backup check command fails, perform the following actions:
1. Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
1. Make any correction suggested in the output and run the pre-backup check again.
#### <a id='back-up-director-back-up'></a> Back Up the Tanzu Kubernetes Grid Integrated Edition BOSH Director
1. If the pre-backup check succeeds, run the BBR back up command from your jump box to back up the
TKGI BOSH Director:
```
bbr director --host BOSH-DIRECTOR-IP --username bbr \
--private-key-path PRIVATE-KEY-FILE backup
```
Where:
* `BOSH-DIRECTOR-IP` is the address of the BOSH Director. If the BOSH Director is public, `BOSH-DIRECTOR-IP`
is a URL, such as `https://my-bosh.xxx.cf-app.com`. Otherwise, this is the internal IP.
See [Retrieve the BOSH Director Address](#bosh-address) for more information.
* `PRIVATE-KEY-FILE` is the path to the private key file that you can create from `Bbr Ssh Credentials` as shown in
[Download the BBR SSH Credentials](#bbr-ssh-creds).
For example:
```console
$ bbr director --host 10.0.0.5 --username bbr \
--private-key-path private-key.pem backup
```
<p class="note"><strong>Note</strong>: The BBR back up command can take a long time to complete.
You can run it independently of the SSH session so that the process can continue running even
if your connection to the jump box fails. The command above uses <code>nohup</code>, but you can
run the command in a <code>screen</code> or <code>tmux</code> session instead.</p>
1. If the command completes successfully, follow the steps in [Manage Your Backup Artifact](#good-practices) below.
1. If the backup command fails, perform the following actions:
* Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
* Follow the steps in [Recover from a Failing Command](#recover-from-failing-command).
### <a id='back-up-control-plane'></a> Back Up the Tanzu Kubernetes Grid Integrated Edition Control Plane
To back up your Tanzu Kubernetes Grid Integrated Edition Control Plane, first validate the Control Plane, and then run the `bbr` backup command.
#### <a id='locate-deploy-name'></a> Locate the Tanzu Kubernetes Grid Integrated Edition Deployment Name
Locate and record your Tanzu Kubernetes Grid Integrated Edition BOSH deployment name as follows:
1. Open an SSH connection to either your jump box, as described in the previous section, or the Ops Manager VM.
For instructions on how to SSH into the Ops Manager VM, see
[Log in to the Ops Manager VM with SSH](https://techdocs.broadcom.com/us/en/vmware-tanzu/platform/tanzu-operations-manager/3-0/tanzu-ops-manager/install-trouble-advanced.html#ssh)
in _Advanced Troubleshooting with the BOSH CLI_.
1. On the command line, run the following command to retrieve your Tanzu Kubernetes Grid Integrated Edition BOSH deployment name.
```
BOSH-CLI-CREDENTIALS deployments | grep pivotal-container-service
```
Where `BOSH-CLI-CREDENTIALS` is the full value that you copied from the BOSH Director tile in
[Download the BOSH Commandline Credentials](#bosh-cli-creds).
<br><br>
For example:
```console
$ BOSH_CLIENT=ops_manager BOSH_CLIENT_SECRET=p455w0rd BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate BOSH_ENVIRONMENT=10.0.0.5 bosh deployments | grep pivotal-container-service
pivotal-container-service-51f08f6402aaa960f041 backup-and-restore-sdk/1.9.0 bosh-google-kvm-ubuntu-jammy-go_agent/1.75
service-instance_4ffeb5b5-5182-4faa-9d92-696d97cc9ae1 bosh-dns/1.10.0 bosh-google-kvm-ubuntu-jammy-go_agent/1.75
pivotal-container-service-51f08f6402aaa960f041
```
1. Review the returned output. The Tanzu Kubernetes Grid Integrated Edition BOSH deployment name begins with
`pivotal-container-service` and includes a unique identifier.
In the example output above, the BOSH deployment name is `pivotal-container-service-51f08f6402aaa960f041`.
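Optionally, capture the deployment name in a shell variable so you can reuse it in later `bbr` commands. The following is a minimal sketch that assumes the same BOSH Commandline Credentials as the example above:
```console
$ TKGI_DEPLOYMENT=$(BOSH_CLIENT=ops_manager BOSH_CLIENT_SECRET=p455w0rd \
  BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate \
  BOSH_ENVIRONMENT=10.0.0.5 bosh deployments \
  | grep -o '^pivotal-container-service-[a-f0-9]*' | head -n 1)
$ echo "${TKGI_DEPLOYMENT}"
pivotal-container-service-51f08f6402aaa960f041
```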
#### <a id='back-up-control-plane-check'></a> Validate the Tanzu Kubernetes Grid Integrated Edition Control Plane
1. To confirm that your TKGI control plane is reachable and has a deployment that can be backed up, run the BBR pre-backup check command:
```
BOSH_CLIENT_SECRET=BOSH-CLIENT-SECRET bbr deployment \
--target BOSH-TARGET --username BOSH-CLIENT --deployment DEPLOYMENT-NAME \
--ca-cert PATH-TO-BOSH-SERVER-CERT \
pre-backup-check
```
Where:
* `BOSH-CLIENT-SECRET` is your BOSH client secret. If you do not know your BOSH Client Secret, open your BOSH Director tile,
navigate to **Credentials > Bosh Commandline Credentials** and record the value for `BOSH_CLIENT_SECRET`.
* `BOSH-TARGET` is your BOSH Environment setting. If you do not know your BOSH Environment setting, open your BOSH Director tile,
navigate to **Credentials > Bosh Commandline Credentials** and record the value for `BOSH_ENVIRONMENT`. You must be able to
reach the target address from the workstation where you run `bbr` commands.
* `BOSH-CLIENT` is your BOSH Client Name. If you do not know your BOSH Client Name, open your BOSH Director tile,
navigate to **Credentials > Bosh Commandline Credentials** and record the value for `BOSH_CLIENT`.
* `DEPLOYMENT-NAME` is the Tanzu Kubernetes Grid Integrated Edition BOSH deployment name that you located in
the [Locate the Tanzu Kubernetes Grid Integrated Edition Deployment Name](#locate-deploy-name) section above.
* `PATH-TO-BOSH-SERVER-CERT` is the path to the root CA certificate that you downloaded in [Download the Root CA Certificate](#root-ca-cert) above.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd bbr deployment \
--target bosh.example.com --username admin --deployment cf-acceptance-0 \
--ca-cert bosh.ca.cert \
pre-backup-check
```
1. If the pre-backup check command fails, perform the following actions:
1. Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
1. Make any corrections suggested in the output and run the pre-backup check again. For example,
the deployment that you selected might not have the correct backup scripts, or the connection
to the BOSH Director might have failed.
#### <a id='back-up-control-plane-backup'></a> Back Up the Tanzu Kubernetes Grid Integrated Edition Control Plane
If the pre-backup check succeeds, run the BBR backup command.
1. To back up the TKGI control plane, run the following BBR backup command from your jump box:
```
BOSH_CLIENT_SECRET=BOSH-CLIENT-SECRET nohup bbr deployment \
--target BOSH-TARGET --username BOSH-CLIENT --deployment DEPLOYMENT-NAME \
--ca-cert PATH-TO-BOSH-SERVER-CERT \
backup --with-manifest [--artifact-path]
```
Where:
* `BOSH-CLIENT-SECRET` is your BOSH client secret. If you do not know your BOSH Client Secret, open your BOSH Director tile,
navigate to **Credentials > Bosh Commandline Credentials** and record the value for `BOSH_CLIENT_SECRET`.
* `BOSH-TARGET` is your BOSH Environment setting. If you do not know your BOSH Environment setting, open your BOSH Director tile,
navigate to **Credentials > Bosh Commandline Credentials** and record the value for `BOSH_ENVIRONMENT`. You must be able to
reach the target address from the workstation where you run <code>bbr</code> commands.
* `BOSH-CLIENT` is your BOSH Client Name. If you do not know your BOSH Client Name, open your BOSH Director tile,
navigate to **Credentials > Bosh Commandline Credentials** and record the value for `BOSH_CLIENT`.
* `DEPLOYMENT-NAME` is the Tanzu Kubernetes Grid Integrated Edition BOSH deployment name that you located in
the [Locate the Tanzu Kubernetes Grid Integrated Edition Deployment Name](#locate-deploy-name) section above.
* `PATH-TO-BOSH-SERVER-CERT` is the path to the root CA certificate that you downloaded in [Download the Root CA Certificate](#root-ca-cert) above.
* `--with-manifest` is an optional `backup` parameter that includes the deployment manifest in the backup artifact.
* `--artifact-path` is an optional `backup` parameter to specify the output path for the backup artifact.
<p class="note"><strong>Note</strong>: The <code>--with-manifest</code> flag is necessary in order to redeploy your TKGI Control Plane in the case of its loss.
Secure the backup artifact created by this process because it contains secret credentials.</p>
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd nohup bbr deployment \
--target bosh.example.com --username admin --deployment cf-acceptance-0 \
--ca-cert bosh.ca.cert \
backup --with-manifest
```
<p class="note"><strong>Note</strong>: The BBR backup command can take a long time to complete.
You can run it independently of the SSH session so that the process can continue running even
if your connection to the jump box fails. The command above uses <code>nohup</code>, but you can
run the command in a <code>screen</code> or <code>tmux</code> session instead.</p>
1. If the command completes successfully, follow the steps in [Manage Your Backup Artifact](#good-practices) below.
1. If the backup command fails, perform the following actions:
1. Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
1. Follow the steps in [Recover from a Failing Command](#recover-from-failing-command).
### <a id='back-up-clusters'></a> Back Up Cluster Deployments
Before backing up your TKGI cluster deployments, verify that they can be backed up.
#### <a id='verify-deployments'></a> Verify Your Cluster Deployments
To verify that you can reach your TKGI cluster deployments and that the deployments can be backed up, follow the steps below.
1. SSH into your jump box. For more information about the jump box, see
[Configure Your Jump Box](bbr-install.html#jumpbox-setup) in _Installing BOSH Backup and Restore_.
1. To perform the BBR pre-backup check, run the following command from your jump box:
```
BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET bbr deployment \
--all-deployments --target BOSH-TARGET --username TKGI-UAA-CLIENT-NAME \
--ca-cert PATH-TO-BOSH-SERVER-CERT \
pre-backup-check
```
Where:
* `TKGI-UAA-CLIENT-SECRET` is the value you recorded for `uaa_client_secret` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `BOSH-TARGET` is the value you recorded for the BOSH Director's address in
[Retrieve the BOSH Director Address](#bosh-address) above.
You must be able to reach the target address from the machine where you run `bbr` commands.
* `TKGI-UAA-CLIENT-NAME` is the value you recorded for `uaa_client_name` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `PATH-TO-BOSH-SERVER-CERT` is the path to the root CA certificate that you downloaded in
[Download or Locate Root CA Certificate](#root-ca-cert) above.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd bbr deployment \
--all-deployments --target bosh.example.com --username pivotal-container-service-12345abcdefghijklmn \
--ca-cert /var/tempest/workspaces/default/root_ca_certificate \
pre-backup-check
```
1. If the pre-backup-check command is successful, the command returns a list of cluster
deployments that can be backed up.
<br>
For example:
```console
[21:51:23] Pending: service-instance_abcdeg-1234-5678-hijk-90101112131415
[21:51:23] -------------------------
[21:51:31] Deployment 'service-instance_abcdeg-1234-5678-hijk-90101112131415' can be backed up.
[21:51:31] -------------------------
[21:51:31] Successfully can be backed up: service-instance_abcdeg-1234-5678-hijk-90101112131415
```
In the output above, `service-instance_abcdeg-1234-5678-hijk-90101112131415` is the
BOSH deployment name of a TKGI cluster.
1. If the pre-backup-check command fails, do one or more of the following:
* Make sure you are using the correct Tanzu Kubernetes Grid Integrated Edition credentials.
* Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
* Make the changes suggested in the output and run the pre-backup check again. For example,
the deployments might not have the correct backup scripts, or the connection to
the BOSH Director might have failed.
#### <a id='back-up-clusters-back-up'></a> Back Up Cluster Deployments
When backing up your TKGI clusters, you can back up a single cluster deployment or all cluster deployments in scope.
Use one of the following procedures:
* [Back up All Cluster Deployments](#back-up-all)
* [Back Up One Cluster Deployment](#back-up-one)
##### <a id='back-up-all'></a> Back Up All Cluster Deployments
The following procedure backs up all cluster deployments.
Make sure you use the TKGI UAA credentials that you recorded in
[Download the UAA Client Credentials](#cluster-creds).
These credentials limit the scope of the back up to cluster deployments only.
<p class="note"><strong>Note</strong>: The BBR backup command can take a long time to complete.
You can run it independently of the SSH session so that the process can continue running even if
your connection to the jump box fails.
The command below uses <code>nohup</code>, but you could also run the command in a
<code>screen</code> or <code>tmux</code> session.</p>
1. To back up all cluster deployments, run the following command from your jump box:
```
BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET nohup bbr deployment \
--all-deployments --target BOSH-TARGET --username TKGI-UAA-CLIENT-NAME \
--ca-cert PATH-TO-BOSH-SERVER-CERT \
backup [--with-manifest] [--artifact-path]
```
Where:
* `TKGI-UAA-CLIENT-SECRET` is the value you recorded for `uaa_client_secret` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `BOSH-TARGET` is the value you recorded for the BOSH Director's address in
[Retrieve the BOSH Director Address](#bosh-address) above.
You must be able to reach the target address from the machine where you run `bbr` commands.
* `TKGI-UAA-CLIENT-NAME` is the value you recorded for `uaa_client_name` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `PATH-TO-BOSH-SERVER-CERT` is the path to the root CA certificate that you downloaded in
[Download the Root CA Certificate](#root-ca-cert) above.
* `--with-manifest` is an optional `backup` parameter to include the manifest in the backup artifact.
If you use this flag, secure the backup artifact because it contains secret credentials.
* `--artifact-path` is an optional `backup` parameter to specify the output path for the backup artifact.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd \
nohup bbr deployment \
--all-deployments \
--target bosh.example.com \
--username pivotal-container-service-12345abcdefghijklmn \
--ca-cert /var/tempest/workspaces/default/root_ca_certificate \
backup
```
<p class="note"><strong>Note</strong>: The optional <code>--with-manifest</code> flag directs BBR to create a backup
containing credentials. Manage the generated backup artifact knowing it contains secrets for administering
your environment.</p>
1. If the `backup` command completes successfully, follow the steps in [Manage Your Backup Artifact](#good-practices) below.
1. If the `backup` command fails, the backup operation exits. BBR does not attempt to continue backing up any
remaining clusters. To troubleshoot a failing backup, do one or more of the following:
* Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
* Follow the steps in [Recover from a Failing Command](#recover-from-failing-command) below.
##### <a id='back-up-one'></a> Back Up One Cluster Deployment
1. To back up a single, specific cluster deployment, run the following command from your jump box:
```
BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET \
nohup bbr deployment \
--deployment CLUSTER-DEPLOYMENT-NAME \
--target BOSH-TARGET \
--username TKGI-UAA-CLIENT-NAME \
--ca-cert PATH-TO-BOSH-SERVER-CERT \
backup [--with-manifest] [--artifact-path]
```
Where:
* `TKGI-UAA-CLIENT-SECRET` is the value you recorded for `uaa_client_secret` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `CLUSTER-DEPLOYMENT-NAME` is the value you recorded in
[Retrieve your Cluster Deployment Name](#cluster-deployment-name) above.
* `BOSH-TARGET` is the value you recorded for the BOSH Director's address in
[Retrieve the BOSH Director Address](#bosh-address) above. You must be able to reach the
target address from the machine where you run `bbr` commands.
* `TKGI-UAA-CLIENT-NAME` is the value you recorded for `uaa_client_name` in
[Download the UAA Client Credentials](#cluster-creds) above.
* `PATH-TO-BOSH-SERVER-CERT` is the path to the root CA certificate that you downloaded in
[Download the Root CA Certificate](#root-ca-cert) above.
* `--with-manifest` is an optional `backup` parameter to include the manifest in the backup artifact.
If you use this flag, secure the backup artifact because it contains secret credentials.
* `--artifact-path` is an optional `backup` parameter to specify the output path for the backup artifact.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd nohup bbr deployment \
--deployment service-instance_abcdeg-1234-5678-hijk-9010111213141 \
--target bosh.example.com --username pivotal-container-service-12345abcdefghijklmn \
--ca-cert /var/tempest/workspaces/default/root_ca_certificate \
backup
```
<p class="note"><strong>Note</strong>: The optional <code>--with-manifest</code> flag directs BBR to create a backup
containing credentials. Manage the generated backup artifact knowing it contains secrets for administering
your environment.</p>
1. If the `backup` command completes successfully, follow the steps in [Manage Your Backup Artifact](#good-practices) below.
1. If the `backup` command fails, do one or more of the following:
* Run the command again, adding the `--debug` flag to enable debug logs. For more information,
see [BBR Logging](bbr-logging.html).
* Follow the steps in [Recover from a Failing Command](#recover-from-failing-command) below.
### <a id='cancel-backup'></a> Cancel a Back Up
Backing up can take a long time. If you realize that the backup is going to fail or that
your developers need to push an app immediately, you might need to cancel the backup.
To cancel a backup, perform the following steps:
1. Terminate the BBR process by pressing Ctrl-C and typing `yes` to confirm.
1. Because stopping a backup can leave the system in an unusable state and prevent additional
backups, follow the procedures in [Clean Up After a Failed Back Up](#manual-clean) below.
## <a id="backup-vsphere"></a> Back Up vCenter, and NSX if Used (vSphere Only)
If you are running Tanzu Kubernetes Grid Integrated Edition on vSphere with or without NSX
networking, you must back up your vCenter in addition to completing the BBR
procedures above.
For Tanzu Kubernetes Grid Integrated Edition deployments with NSX networking, you must also
back up the NSX Manager.
To complete the back up of your Tanzu Kubernetes Grid Integrated Edition environment running
on vSphere:
1. Back up vCenter. See
[Overview of Backup and Restore options in vCenter Server 6.x (2149237)](https://knowledge.broadcom.com/external/article?legacyId=2149237)
in the VMware documentation.
1. If you use NSX networking, back up the NSX Manager. See
[Backing Up and Restoring the NSX Manager](https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.admin.doc/GUID-A0B3667C-FB7D-413F-816D-019BFAD81AC5.html)
in the VMware documentation.
## <a id="after-backup"></a> After Backing Up Tanzu Kubernetes Grid Integrated Edition
After the backup has completed, review and manage the generated backup artifacts.
### <a id="good-practices"></a> Manage Your Backup Artifact
The BBR-created backup consists of a directory containing the backup artifacts and metadata files.
BBR stores each completed backup directory within the current working directory.
<p class="note"><strong>Note</strong>: The optional <code>--with-manifest</code> flag directs BBR to create a backup
containing credentials. Manage the generated backup artifact knowing it contains secrets for administering
your environment.</p>
BBR backup artifact directories are named using the following formats:
* `DIRECTOR-IP-TIMESTAMP` for the BOSH Director backups.
* `DEPLOYMENT-TIMESTAMP` for the Control Plane backup.
* `DEPLOYMENT-TIMESTAMP` for the cluster deployment backups.
Keep your backup artifacts safe by following these steps:
1. Move the backup artifacts off the jump box to your storage space.
1. Compress and encrypt the backup artifacts when storing them, as shown in the example after this list.
1. Make redundant copies of your backup and store them in multiple locations. This minimizes the
risk of losing your backups in the event of a disaster.
1. Each time you redeploy Tanzu Kubernetes Grid Integrated Edition, test your backup artifact by following the procedures in:
* [Restore the Tanzu Kubernetes Grid Integrated Edition BOSH Director](bbr-restore.html#redeploy-restore-director)
* [Restore the Tanzu Kubernetes Grid Integrated Edition Control Plane](bbr-restore.html#redeploy-restore-control-plane)
* [Restore Tanzu Kubernetes Grid Integrated Edition Clusters](bbr-restore.html#redeploy-restore-clusters)
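The following is a minimal sketch of compressing and encrypting a backup artifact directory before moving it off the jump box. It assumes `tar` and GnuPG (`gpg`) are available on the jump box and uses a hypothetical `BACKUP-ARTIFACT-DIRECTORY` placeholder; adapt the encryption step to your own key management practices:
```console
$ tar -czf tkgi-backup.tar.gz BACKUP-ARTIFACT-DIRECTORY/
$ gpg --symmetric --cipher-algo AES256 tkgi-backup.tar.gz
```
Where `BACKUP-ARTIFACT-DIRECTORY` is a backup directory that BBR created in your working directory. Transfer the resulting `.gpg` file to your storage space and delete the unencrypted archive after verifying the encrypted copy.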
### <a id="recover-from-failing-command"></a> Recover from a Failing Command
If the back up fails, follow these steps:
1. Ensure that you set all the parameters in the `backup` command.
1. Ensure the credentials previously obtained are valid.
1. Ensure the deployment that you specify in the BBR command exists.
1. Ensure that the jump box can reach the BOSH Director. One way to check this is shown in the example after this list.
1. Consult [BBR Logging](bbr-logging.html).
1. If you see the error message: `Directory /var/vcap/store/bbr-backup already exists on instance`,
run the appropriate cleanup command. See [Clean Up After a Failed Back Up](#manual-clean) below for more information.
1. If the backup artifact is corrupted, discard the failing artifacts and run the backup again.
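For example, one quick way to confirm that the jump box can reach the BOSH Director is to query the Director's unauthenticated `/info` endpoint on the default Director API port. This is a minimal sketch that assumes an internal Director address of `10.0.0.5`:
```console
$ curl -k https://10.0.0.5:25555/info
```
If the Director is reachable, the command returns a JSON document describing the Director. If the command hangs or fails to connect, investigate network connectivity and firewall rules between the jump box and the Director.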
### <a id="manual-clean"></a> Clean Up after a Failed Back Up
If your backup process fails, use the BBR cleanup script to clean up the failed run.
<p class="note warning"><strong>Warning</strong>: It is important to run the BBR cleanup script after a
failed BBR backup run. A failed backup run might leave the BBR backup directory on the instance,
causing any subsequent backup attempts to fail. In addition, BBR might not have run the post-backup scripts,
leaving the instance in a locked state.</p>
* If the TKGI BOSH Director backup failed, run the following BBR cleanup command:
```
bbr director --host BOSH-DIRECTOR-IP \
--username bbr --private-key-path PRIVATE-KEY-FILE \
backup-cleanup
```
Where:
* `BOSH-DIRECTOR-IP` is the address of the BOSH Director. If the BOSH Director is public,
`BOSH-DIRECTOR-IP` is a URL, such as `https://my-bosh.xxx.cf-app.com`. Otherwise, this is the internal
IP address, which you can retrieve as shown in [Retrieve the BOSH Director Address](#bosh-address) above.
* `PRIVATE-KEY-FILE` is the path to the private key file that you can create from `Bbr Ssh Credentials` as shown in
[Download the BBR SSH Credentials](#bbr-ssh-creds) above.
For example:
```console
$ bbr director --host 10.0.0.5 --username bbr \
--private-key-path private-key.pem \
backup-cleanup
```
* If the TKGI control plane or TKGI cluster backups fail, run the following BBR cleanup command:
```
BOSH_CLIENT_SECRET=BOSH-CLIENT-SECRET \
bbr deployment \
--target BOSH-TARGET \
--username BOSH-CLIENT \
--deployment DEPLOYMENT-NAME \
--ca-cert PATH-TO-BOSH-CA-CERT \
backup-cleanup
```
Where:
* `BOSH-CLIENT-SECRET` is your BOSH client secret. If you do not know your BOSH Client Secret,
open your BOSH Director tile, navigate to **Credentials > Bosh Commandline Credentials** and
record the value for `BOSH_CLIENT_SECRET`.
* `BOSH-TARGET` is your BOSH Environment setting. If you do not know your BOSH Environment setting,
open your BOSH Director tile, navigate to **Credentials > Bosh Commandline Credentials** and
record the value for `BOSH_ENVIRONMENT`. You must be able to reach the target address from the
workstation where you run `bbr` commands.
* `BOSH-CLIENT` is your BOSH Client Name. If you do not know your BOSH Client Name, open your BOSH Director tile,
navigate to **Credentials > Bosh Commandline Credentials** and record the value for `BOSH_CLIENT`.
* `DEPLOYMENT-NAME` is the Tanzu Kubernetes Grid Integrated Edition BOSH deployment name that you located in
the [Locate the Tanzu Kubernetes Grid Integrated Edition Deployment Name](#locate-deploy-name) section above.
* `PATH-TO-BOSH-CA-CERT` is the path to the root CA certificate that you downloaded in
[Download the Root CA Certificate](#root-ca-cert) above.
For example:
```console
$ BOSH_CLIENT_SECRET=p455w0rd bbr deployment \
--target bosh.example.com --username admin --deployment cf-acceptance-0 \
--ca-cert bosh.ca.crt \
backup-cleanup
```
If the cleanup script fails, consult the following table to match the exit codes to an error
message.
<table>
<tr>
<th>Exit Code</th>
<th>Error</th>
</tr>
<tr>
<td>0</td>
<td>Success</td>
</tr>
<tr>
<td>1</td>
<td>General failure</td>
</tr>
<tr>
<td>8</td>
<td>The post-backup unlock failed. One of your deployments might be in a bad state and require
attention.</td>
</tr>
<tr>
<td>16</td>
<td>The cleanup failed. This is a non-fatal error indicating that the utility has been unable
to clean up open BOSH SSH connections to a deployment's VMs. Manual cleanup might be required
to clear any hanging BOSH users and connections.</td>
</tr>
</table>