adding more to the readme
ctr26 committed Apr 9, 2024
1 parent a6348bb commit 32c73c4
## Hypha Helm Chart

This document describes how to add the Hypha Helm chart repository, install the Hypha chart, pass in a values file, generate secrets, and check available chart versions. It also covers the chart's dependencies and the configuration of the Triton Inference Server, including how to load models from an S3-compatible service such as Minio.

## Adding the Helm Chart Repository

To add the Hypha Helm chart repository, run the following command:

```bash
helm repo add hypha https://amun-ai.github.io/hypha-helm-charts/
```

## Installing the Helm Chart

You can install the Hypha Helm chart from the repository, resolving its dependencies automatically, as follows:

```bash
helm install --generate-name hypha/hypha --dependency-update
```

## Using a Values File

To customize the installation, pass in a custom `values.yaml` file:

```bash
helm install --values values.yaml --generate-name hypha/hypha --dependency-update
```
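For illustration, a minimal values file might look like the following; the keys shown here are hypothetical, so consult the chart's own `values.yaml` for the options it actually supports:

```yaml
# Hypothetical overrides -- check the chart's values.yaml for the real keys.
replicaCount: 2
service:
  type: ClusterIP
```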


## Generating a Secret

To create a secret, such as a password or key, use:

```bash
openssl rand -base64 32
```
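The generated value can be captured in a shell variable and then supplied to the chart, for example via `--set` (the value key used in the commented line is illustrative, not a documented chart option):

```shell
# Generate a 32-byte random value, base64-encoded (44 characters).
SECRET=$(openssl rand -base64 32)
echo "$SECRET"
# Hypothetical usage -- the actual key depends on the chart's values:
# helm install --generate-name hypha/hypha --set auth.secret="$SECRET"
```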

## Checking Available Versions

To see what versions of the Hypha chart are available, use:

```bash
helm search repo hypha --versions
```
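Once you know which versions exist, the installation can be pinned to one of them with the `--version` flag (the version number below is a placeholder):

```bash
# Replace 0.20.0 with a version reported by `helm search repo hypha --versions`.
helm install --generate-name hypha/hypha --version 0.20.0 --dependency-update
```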

## Dependencies

The chart has dependencies on Minio, Redis, and Triton. Note that these dependencies need to be installed as part of the chart installation; pre-existing installations cannot be used directly.

## Triton Inference Server

### Getting Models from S3

Models can be stored on an S3-compatible service such as Minio. They can be loaded using a path format like `s3::/minio/model-repository`.
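As a quick sanity check, assuming an `mc` alias named `minio` has already been configured for the Minio deployment (the alias name is an assumption), you can verify that the models are present in the bucket:

```bash
# "minio" is an assumed mc alias; "model-repository" is the bucket used above.
mc ls minio/model-repository
```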

### Configuration

In the Triton configuration, you can customise the `initContainers` and `volumes` to load models from a storage service.

Example of mounting a volume for the Triton cache:

```yaml
volumeMounts:
  - mountPath: /model-repository
    name: triton-cache
    subPath: model-repository
```
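This mount needs a corresponding entry under `volumes`; one possibility, assuming the cache is backed by a PersistentVolumeClaim named `triton-cache` (the claim name is illustrative), is:

```yaml
# Hypothetical backing volume for the triton-cache mount above.
volumes:
  - name: triton-cache
    persistentVolumeClaim:
      claimName: triton-cache
```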

Example of an initContainer to synchronise models from an S3-compatible service:

```yaml
initContainers:
  - name: sync
    image: minio/mc
    command: ["/bin/sh"]
    args:
      - "-c"
      - |
        # The endpoint and credentials below are placeholders for your Minio
        # deployment; `mc alias set` expects an HTTP(S) endpoint, not an s3:// URL.
        mc alias set minio http://minio:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"
        mc mirror --overwrite minio/model-repository/ /model-repository
    volumeMounts:
      - mountPath: /model-repository
        name: triton-cache
        subPath: model-repository
```

## Manually uploading models to the Bioengine

For situations where models need to be updated or added manually without using an S3-compatible service, you can use the `kubectl cp` command. This lets you copy model directories directly from your local machine to the Triton Inference Server pod. This approach is not robust, so it is best reserved for quick tests and one-off changes.
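For example, assuming the Triton pod is named `triton-0` in the `default` namespace and the model repository is mounted at `/model-repository` (all of these are placeholders), a local model directory could be copied in with:

```bash
# Pod name, namespace, and paths are placeholders for your deployment.
kubectl cp ./my-model default/triton-0:/model-repository/my-model
```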

## Cloud Providers

The chart supports deployment on cloud platforms, including GCP.

TODO: add provider-specific deployment examples.
