This repository shows an example of how to hook an arbitrary Zeppelin notebook project into the Banzai Pipeline CI/CD workflow.
The project contains a simple Zeppelin notebook that operates on the San Francisco Police Department Incidents dataset. (The format and the description of the dataset are available here.)
The notebook is a JSON file exported from Zeppelin. It is recommended to edit it with the Zeppelin notebook editor, as the exported JSON contains a lot of "noise".
The notebook in this project is made up of a few paragraphs that:
- configure the Spark context
- load and parse the dataset
- render a map for the data to be displayed on
- execute select operations with different API calls and display the data
There are three CI/CD flow descriptor templates provided so far, one per cloud provider:
- .pipeline.yml.aws.template for Amazon
- .pipeline.yml.azure.template for Azure
- .pipeline.yml.gke.template for Google Cloud

The templates are made up of the same steps and differ only in the steps in charge of provisioning the Kubernetes cluster. In the case of Azure, the deployment also needs your Azure storage account name and access key; you will have to add these as secrets (see below), as they are needed for event logging and the Spark History Server. In the case of Amazon, Instance Profile access is used instead.
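To illustrate the difference, the cloud-specific part of a descriptor boils down to the cluster-provisioning step carrying different fields per provider. The sketch below only shows that shape using the placeholders from this README; the real step name and field names are defined by the provided templates, not by this sketch.

```yaml
# Illustrative sketch only; consult the .pipeline.yml.*.template files for
# the real step name and field names.
create_cluster:
  cluster_name: "[[your-cluster-name]]"
  # The Azure template additionally needs [[your-azure-cluster-location]] and
  # [[your-azure-resource-group]]; the GKE template uses
  # [[your-gke-cluster-name]] and [[your-gke-project-id]] instead.
```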
Prerequisites for the Amazon (AWS) flow:
- An instance of the Banzai Cloud Control Plane needs to be running and accessible
- Create an S3 bucket for persisting Spark event logs, so that they can be accessed by the Spark History Server. You will have to set the name of this bucket in the example yml as [[your-s3-bucket]].
- The example dataset needs to be available to the cluster: download it from the above mentioned location, upload it to your S3 bucket, then update our example notebook sf-police-incidents-aws.json, replacing [[your-bucket-name]].
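For reference, data stored in S3 is addressed from Spark via the s3a:// connector. The snippet below only illustrates the resulting path formats; the key names and the dataset file name are placeholders of ours, not values taken from the repository.

```yaml
# Illustrative path formats only (key names and file name are placeholders):
event_log_dir: "s3a://[[your-s3-bucket]]/eventLog"
dataset_path: "s3a://[[your-bucket-name]]/<dataset-file>.csv"
```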
Prerequisites for the Azure flow:
- An instance of the Banzai Cloud Control Plane needs to be running and accessible
- The following resources are needed on the Azure cloud:
  - a Resource Group in one of the locations
  - a Storage Account; take note of your access key, as you will have to set [[your-storage-account-name]] and [[your-storage-account-access-key]] as secrets in later steps
  - a Blob Service for persisting Spark event logs so that they can be accessed by the Spark History Server. You will have to set the name of this Blob container in the example yml as [[your-blob-container]].
- The data needs to be downloaded from the above mentioned location (our smaller data set is also available here) and uploaded to WASB. Create a separate Blob Service in the same Storage Account created in the previous step and upload the data file. Update our example notebook sf-police-incidents-azure.json, replacing the [[your-blob-container]] and [[your-azure-storage-account]] values with yours.
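For reference, blobs in Azure Storage are addressed from Spark via the wasb:// (or wasbs://) connector as container@account paths. The snippet below only illustrates the path format; the key names, the data container and the file name are placeholders of ours.

```yaml
# Illustrative WASB path formats only (key, container and file names below are placeholders):
event_log_dir: "wasb://[[your-blob-container]]@[[your-azure-storage-account]].blob.core.windows.net/eventLog"
dataset_path: "wasb://<your-data-container>@[[your-azure-storage-account]].blob.core.windows.net/<dataset-file>.csv"
```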
Prerequisites for the Google Cloud (GKE) flow:
- An instance of the Banzai Cloud Control Plane needs to be running and accessible
- The following resources are needed on Google Cloud:
  - a Project; you will have to enter the ID of your project as [[your-gke-project-id]] in the example yml
  - a Storage bucket for persisting Spark event logs so that they can be accessed by the Spark History Server. You will have to set the name of this bucket in the example yml as [[your-gs-bucket]].
- The data needs to be downloaded from the above mentioned location (our smaller data set is also available here) and uploaded to Google Storage. Please create a separate Storage Bucket for this. Update our example notebook sf-police-incidents-gke.json, replacing the [[your-gs-bucket]] value.
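For reference, Google Cloud Storage objects are addressed from Spark with gs:// URIs. The snippet below only illustrates the path format; the key names, the data bucket and the file name are placeholders of ours.

```yaml
# Illustrative Google Cloud Storage path formats only (key, bucket and file names below are placeholders):
event_log_dir: "gs://[[your-gs-bucket]]/eventLog"
dataset_path: "gs://<your-data-bucket>/<dataset-file>.csv"
```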
Steps required to hook into the Banzai Pipeline CI/CD workflow
In order for a project to be part of a Banzai Pipeline CI/CD workflow, it must contain a specific configuration file, .pipeline.yml, in its root folder.
In short, the configuration file contains the steps the project goes through in the workflow, from provisioning the environment, building the code and running tests, to being deployed and executed, along with project-specific variables (e.g. credentials, program arguments and anything else needed to assemble the deployment). We refer to this file as the CI/CD flow descriptor.
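Conceptually, the descriptor is an ordered set of named steps plus the variables those steps need. The sketch below only shows that shape; apart from the run step keys quoted later in this README, every name in it is an assumption, so rely on the provided templates for the real structure.

```yaml
# Shape of a flow descriptor (illustrative only; the real step names and
# fields come from the .pipeline.yml.*.template files):
pipeline:
  create_cluster:                         # provision the Kubernetes cluster (cloud-specific)
    cluster_name: "[[your-cluster-name]]"
  run:                                    # deploy and run the Zeppelin notebook
    zeppelin_notebook_name: "sf-police-incidents-aws.json"
```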
- depending on the chosen cloud provider, rename one of the templates to .pipeline.yml and update the following properties:
  - for Amazon:
    - [[your-cluster-name]]
    - [[your-s3-bucket]]
  - for Azure:
    - [[your-cluster-name]]
    - [[your-azure-cluster-location]]
    - [[your-azure-resource-group]]
    - [[your-blob-container]]
  - for Google Cloud:
    - [[your-gke-cluster-name]]
    - [[your-gke-project-id]]
    - [[your-gs-bucket]]
- navigate to the CI/CD user interface (that usually runs on the Banzai Cloud Control Plane instance)
- enable the project build from the list of available repositories
- add the following secrets to the build:
  - PLUGIN_ENDPOINT = [control-plane]/pipeline/api/v1
  - PLUGIN_TOKEN = "oauthToken"
  - credentials for Azure Blob Storage access (Azure flow only):
    - PLUGIN_AZURE_STORAGE_ACCOUNT = "[[your-storage-account-name]]"
    - PLUGIN_AZURE_STORAGE_ACCOUNT_ACCESS_KEY = "[[your-storage-account-access-key]]"
The project is now configured for the Banzai Cloud CI/CD flow. A new flow is triggered on each commit to the repository; you can check its progress on the CI/CD user interface.
If you'd like to hook your own notebook into the Banzai Pipeline workflow:
- add your notebook to the repository: zeppelin-pdi-example/my-notebook.json
- change the configuration file to point to it (see the marked lines below)
```yaml
run:
  ....
  zeppelin_notebook_name: "my-notebook.json"      # <---- change this
  zeppelin_notebook_file_path: "my-notebook.json" # <---- change this
  ....
```