NooBaa
NooBaa can collapse multiple storage silos into a single, scalable storage fabric. It virtualizes any local storage, whether shared or dedicated, physical or virtual, and can also include both private and public cloud storage, all behind the same S3 API and management tools.
More information can be found here and here.
Make sure that DLF is up and running in the dlf namespace. Do a quick check with kubectl get pods -n dlf.
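If you prefer to wait until every DLF pod reports Ready rather than eyeballing the output, a minimal sketch using kubectl wait (assuming all DLF components live in the dlf namespace) is:

# Wait up to 5 minutes for all pods in the dlf namespace to become Ready
kubectl wait --namespace dlf --for=condition=Ready pod --all --timeout=300s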
From the root directory of the repo execute:
./examples/noobaa/noobaa_install.sh
It will install NooBaa and load some sample data; this should take a few minutes. The output of the above command should be similar to this:
$ ./examples/noobaa/noobaa_install.sh
Downloading NooBaa CLI...done
Installing NooBaa...
Noobaa pods ready!
Installed NooBaa
Creating Backing Store
Created Backing Store
Backingstore ready!
Delete Bucket Class
Delete Bucket Class
Creating Bucket Class
Created Bucket Class
done
Creating test OBC...done
Loading data to example bucket...job.batch/example-noobaa-data condition met
done
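Optionally, you can verify the installation before moving on. A quick sketch (assuming the installer placed NooBaa in the noobaa namespace and created an ObjectBucketClaim for the test bucket) is:

# Check that the NooBaa core and endpoint pods are running
kubectl get pods -n noobaa

# List ObjectBucketClaims; the test OBC created by the script should show up here
kubectl get objectbucketclaims --all-namespaces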
We have a helper script to populate the Dataset YAML for you. It uses examples/templates/example-dataset-s3.yaml as a template and fills it in with the connection details for NooBaa.
Execute the script:
./examples/noobaa/create_dataset_desc.sh
Now you should see the new file created at ./examples/noobaa/dataset-noobaa.yaml, and it should look like this:
apiVersion: com.ie.ibm.hpsys/v1alpha1
kind: Dataset
metadata:
  name: example-dataset
spec:
  local:
    type: "COS"
    accessKeyID: "IGZs7cUToNaR6GKquj2y"
    secretAccessKey: "6nXOum0HWwk+8TgS9eyQCppJHzv6MpSDBIV28jiZ"
    endpoint: "http://192.168.99.254:32664"
    bucket: "my-bucket-eb7b15ee-20d9-4ca6-9591-b9472cecc69a"
    region: "" # it can be empty
Now you can create the dataset (as a pointer to the NooBaa bucket) as follows:
kubectl create -f ./examples/noobaa/dataset-noobaa.yaml
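To confirm that the dataset was registered, a quick check (assuming the Dataset custom resource is served under the short name datasets, as in the manifest above) might look like:

# List datasets in the current namespace; example-dataset should appear
kubectl get datasets

# Inspect its status and events in more detail
kubectl describe dataset example-dataset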
Create a pod that uses that dataset:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    dataset.0.id: "example-dataset"
    dataset.0.useas: "mount"
spec:
  containers:
    - name: nginx
      image: nginx
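To apply it, save the manifest to a file and create it; the file name nginx-pod.yaml below is just an example, not something the repo provides:

# Save the manifest above as nginx-pod.yaml, then:
kubectl create -f nginx-pod.yaml

# Wait for the pod to become Ready before exec'ing into it
kubectl wait --for=condition=Ready pod/nginx --timeout=120s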
Once the pod starts you can exec into it and find file1.txt and file2.txt under the /mnt/datasets/example-dataset directory.
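For example, to list the sample files from outside the pod (a sketch; the pod name nginx matches the manifest above):

# List the mounted dataset contents from inside the nginx pod
kubectl exec nginx -- ls /mnt/datasets/example-dataset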
Any changes you make from within the pod to that directory will be reflected in the bucket created by NooBaa.
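As a quick sanity check, you could write a file from inside the pod and then list the bucket directly over S3. The snippet below is a sketch: new-file.txt is an arbitrary name, it assumes the aws CLI is installed, and the endpoint, credentials, and bucket name must be taken from your generated dataset-noobaa.yaml:

# Create a new file through the mounted dataset
kubectl exec nginx -- sh -c 'echo hello > /mnt/datasets/example-dataset/new-file.txt'

# List the bucket over S3 to confirm the file arrived (substitute your own values)
AWS_ACCESS_KEY_ID=<accessKeyID> AWS_SECRET_ACCESS_KEY=<secretAccessKey> \
  aws s3 ls s3://<bucket> --endpoint-url <endpoint>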