fix(deps): update k8s.io/utils digest to 4693a02 - autoclosed #3196
GitHub Actions / e2e-report
failed
Aug 7, 2023 in 0s
6 tests run, 5 passed, 0 skipped, 1 failed.
Annotations
Check failure on line 1 in e2e/node_modules/bats/man/bats.1
github-actions / e2e-report
bats.Given a PVC, When creating a Backup of an app, Then expect Restic repository
(in test file ./test-03-backup.bats, line 48)
`[ "${output}" = "${expected_content}" ]' failed
Raw output
release "k8up" uninstalled
Release "k8up" does not exist. Installing it now.
NAME: k8up
LAST DEPLOYED: Mon Aug 7 08:35:45 2023
NAMESPACE: k8up-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
#####################
! Attention !
#####################
This Helm chart does not include CRDs.
Please make sure you have installed or upgraded the necessary CRDs as instructed in the Chart README.
#####################
A running operator is ready
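For reference, the operator setup above (uninstall, then reinstall via Helm, with CRDs applied separately as the chart notes demand) can be reproduced roughly as follows. This is a sketch: the release name and namespace match the log, but the chart repository URL and the CRD manifest URL are assumptions, not taken from this output.

# Sketch of the operator (re)install; repo URL and CRD manifest URL are assumptions.
helm repo add k8up-io https://k8up-io.github.io/k8up
kubectl apply -f https://github.com/k8up-io/k8up/releases/latest/download/k8up-crd.yaml
helm uninstall k8up --namespace k8up-system || true   # tolerate a missing release
helm upgrade --install k8up k8up-io/k8up --namespace k8up-system --create-namespace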
removed directory './debug/data/pvc-subject'
namespace/k8up-e2e-subject created
The namespace 'k8up-e2e-subject' is ready.
"minio" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "minio" does not exist. Installing it now.
NAME: minio
LAST DEPLOYED: Mon Aug 7 08:35:54 2023
NAMESPACE: minio
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Minio can be accessed via port 9000 on the following DNS name from within your cluster:
minio.minio.svc.cluster.local
To access Minio from localhost, run the below commands:
1. export POD_NAME=$(kubectl get pods --namespace minio -l "release=minio" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace minio
Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/
You can now access Minio server on http://localhost:9000. Follow the below steps to connect to Minio server with mc client:
1. Download the Minio mc client - https://docs.minio.io/docs/minio-client-quickstart-guide
2. Get the ACCESS_KEY=$(kubectl get secret minio -o jsonpath="{.data.accesskey}" | base64 --decode) and the SECRET_KEY=$(kubectl get secret minio -o jsonpath="{.data.secretkey}" | base64 --decode)
3. mc alias set minio-local http://localhost:9000 "$ACCESS_KEY" "$SECRET_KEY" --api s3v4
4. mc ls minio-local
Alternately, you can use your browser or the Minio SDK to access the server - https://docs.minio.io/categories/17
S3 Storage is ready
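The S3 backend above is a stock Minio chart install; a rough sketch follows (the chart repository URL is an assumption; the release name and namespace are from the log):

# Sketch of the Minio install that produces the notes above.
helm repo add minio https://charts.min.io/   # assumption: repo URL
helm repo update
helm upgrade --install minio minio/minio --namespace minio --create-namespace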
persistentvolumeclaim/subject-pvc created
deployment.apps/subject-deployment created
The subject is ready
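The "subject" is the workload under backup: a PVC plus a deployment that writes into it. A minimal sketch of such a PVC (size, access mode, and reliance on the default storage class are assumptions):

# Sketch: the claim the subject deployment mounts; spec values are assumptions.
kubectl apply -n k8up-e2e-subject -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: subject-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF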
secret/backup-credentials created
secret/backup-repo created
backup.k8up.io/k8up-backup created
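The two secrets supply the S3 credentials and the restic repository password; the Backup resource references both. A hedged sketch of such a Backup (the bucket name and secret key names are assumptions; the endpoint matches the in-cluster Minio service from the notes above):

# Sketch of the Backup resource; bucket and secret key names are assumptions.
kubectl apply -n k8up-e2e-subject -f - <<EOF
apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: k8up-backup
spec:
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://minio.minio.svc.cluster.local:9000
      bucket: e2e
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
EOF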
Valid expression. Verification in progress...
Current value for k8up-backup is <none>... (repeated 7 times while polling)
k8up-backup has the right value (true).
Waiting for 'backup/k8up-backup' in namespace 'k8up-e2e-subject' to become 'completed' ...
backup.k8up.io/k8up-backup condition met
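The verification above first polls a status field until it reports true, then blocks on kubectl wait until the condition is met; roughly as follows (the polled jsonpath and the condition name are assumptions inferred from the log lines above):

# Sketch of the two-stage check; jsonpath and condition name are assumptions.
until [ "$(kubectl -n k8up-e2e-subject get backup k8up-backup \
  -o jsonpath='{.status.conditions[?(@.type=="Completed")].status}')" = "True" ]; do
  sleep 5
done
kubectl -n k8up-e2e-subject wait backup/k8up-backup --for=condition=Completed --timeout=5m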
---BEGIN restic snapshots output---
[{"time":"2023-08-07T08:36:46.119165303Z","tree":"df127ae5d45cfb44a30fc470c47dbdfe3d126633ff031508fa467e011c6440f2","paths":["/data/subject-pvc"],"hostname":"k8up-e2e-subject","id":"4de8d4a75342404793286644fe71215758068bd48c8ee6b028084156e784d298","short_id":"4de8d4a7"}]
---END---
Number of Snapshots >= 1? true
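The snapshot listing comes from restic's JSON output; one way to reproduce the check (the repository URL and the secret layout are assumptions matching the Minio backend configured above):

# Sketch: count snapshots in the S3-backed restic repository.
# Repository URL and secret key names are assumptions.
export AWS_ACCESS_KEY_ID="$(kubectl -n k8up-e2e-subject get secret backup-credentials -o jsonpath='{.data.username}' | base64 --decode)"
export AWS_SECRET_ACCESS_KEY="$(kubectl -n k8up-e2e-subject get secret backup-credentials -o jsonpath='{.data.password}' | base64 --decode)"
export RESTIC_REPOSITORY="s3:http://minio.minio.svc.cluster.local:9000/e2e"
export RESTIC_PASSWORD="$(kubectl -n k8up-e2e-subject get secret backup-repo -o jsonpath='{.data.password}' | base64 --decode)"
restic snapshots --json | jq -e 'length >= 1'   # non-zero exit if no snapshots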
---BEGIN actual expected_filename.txt---
warning: couldn't attach to pod/restic-1691397415, falling back to streaming logs: Internal error occurred: error attaching to container: container is in CONTAINER_EXITED state
expected content: 1691397339
---END---
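Diagnosis: the test compares the content of a file retrieved from the backup against a timestamp recorded beforehand, but attaching to the restic pod failed because the container had already exited, and the fallback log stream evidently did not yield the expected value, so the string comparison on line 48 failed. A hypothetical reconstruction of that assertion in bats (the helper name is invented for illustration):

# Hypothetical sketch of the failing check in test-03-backup.bats.
@test "Given a PVC, When creating a Backup of an app, Then expect Restic repository" {
  expected_content="1691397339"                      # timestamp written before the backup
  run fetch_file_from_backup expected_filename.txt   # hypothetical helper using kubectl attach/logs
  [ "${output}" = "${expected_content}" ]            # line 48: this comparison failed
}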