- [Docs]: Added a separate examples page (WIP)
- [Docs]: Added the `LLM as Chatbot` example

peterschmidt85 committed Jul 20, 2023
1 parent 383c1ae commit 416212b
Showing 10 changed files with 233 additions and 14 deletions.
11 changes: 5 additions & 6 deletions README.md
@@ -14,7 +14,7 @@ Cost-effective LLM development

<p align="center">
<a href="https://dstack.ai/docs" target="_blank"><b>Docs</b></a> •
<a href="https://dstack.ai/examples/dolly" target="_blank"><b>Examples</b></a> •
<a href="https://dstack.ai/examples" target="_blank"><b>Examples</b></a> •
<a href="https://dstack.ai/blog" target="_blank"><b>Blog</b></a> •
<a href="https://join.slack.com/t/dstackai/shared_invite/zt-xdnsytie-D4qU9BvJP8vkbkHXdi6clQ" target="_blank"><b>Slack</b></a>
</p>
@@ -29,10 +29,9 @@ It streamlines development and deployment, reduces cloud costs, and frees users

## Latest news

- [2023/07] 🔥 [Lambda Cloud GA and custom Docker images](https://dstack.ai/blog/2023/07/14/lambda-cloud-ga-and-docker-support/) (Release)
- [2023/06] [Running XGen 7B chatbot in your cloud](https://github.com/dstackai/dstack-examples/wiki/Running-XGen-7B-Chatbot-in-your-cloud) (Example)
- [2023/06] [Running LLM as chatbot in your cloud](https://github.com/dstackai/LLM-As-Chatbot/wiki/Running-LLM-As-Chatbot-in-your-cloud) (Example)
- [2023/06] [New configuration format and CLI experience](https://dstack.ai/blog/2023/06/12/new-configuration-format-and-cli-experience/) (Release)
- [2023/07] [LLM as Chatbot](https://dstack.ai/examples/llmchat) (Example)
- [2023/07] [Lambda Cloud GA and Docker support](https://dstack.ai/blog/2023/07/14/lambda-cloud-ga-and-docker-support/) (Release)
- [2023/06] [New YAML format](https://dstack.ai/blog/2023/06/12/new-configuration-format-and-cli-experience/) (Release)

## Installation

@@ -129,7 +128,7 @@ Otherwise, you can always specify the profile using `--profile PROFILE`.
For additional information and examples, see the following links:

- [Docs](https://dstack.ai/docs)
- [Examples](https://github.com/dstackai/dstack-examples/blob/main/README.md)
- [Examples](https://dstack.ai/examples)
- [Blog](https://dstack.ai/blog)
- [Slack](https://join.slack.com/t/dstackai/shared_invite/zt-xdnsytie-D4qU9BvJP8vkbkHXdi6clQ)

Binary file added docs/assets/images/dstack-llmchat-discord-chat.png
Binary file added docs/assets/images/dstack-llmchat-gallery.png
Binary file added docs/assets/images/dstack-llmchat-welcome.png
19 changes: 16 additions & 3 deletions docs/assets/stylesheets/extra.css
@@ -955,13 +955,13 @@ html .md-footer-meta.md-typeset a:is(:focus,:hover) {
visibility: visible;
}

.md-tabs__item:nth-child(3) .md-tabs__link:after {
/*.md-tabs__item:nth-child(3) .md-tabs__link:after {
content: url('data:image/svg+xml,<svg width="16" height="16" viewBox="1 1 27 27" xmlns="http://www.w3.org/2000/svg" fill="rgba(0,0,0,0.87)" stroke="rgba(0,0,0,0.87)" stroke-width="0.75" stroke-linecap="round" stroke-linejoin="round"><path d="M23.5 23.5h-15v-15h4.791V6H6v20h20v-7.969h-2.5z"/><path d="M17.979 6l3.016 3.018-6.829 6.829 1.988 1.987 6.83-6.828L26 14.02V6z"/></svg>');
line-height: 14px;
padding-left: 3px;
}
}*/

.md-tabs__item:nth-child(4) .md-tabs__link:after {
.md-tabs__item:nth-child(5) .md-tabs__link:after {
content: url('data:image/svg+xml,<svg width="16" height="16" viewBox="1 1 27 27" xmlns="http://www.w3.org/2000/svg" fill="rgba(0,0,0,0.87)" stroke="rgba(0,0,0,0.87)" stroke-width="0.75" stroke-linecap="round" stroke-linejoin="round"><path d="M23.5 23.5h-15v-15h4.791V6H6v20h20v-7.969h-2.5z"/><path d="M17.979 6l3.016 3.018-6.829 6.829 1.988 1.987 6.83-6.828L26 14.02V6z"/></svg>');
line-height: 14px;
padding-left: 3px;
@@ -1190,6 +1190,19 @@ html .md-footer-meta.md-typeset a:is(:focus,:hover) {
background: rgba(0,0,0,0.87);
}

[data-md-color-primary=white] .md-button--github:before {
position: relative;
top: 5px;
content: url('data:image/svg+xml,<svg xmlns="http://www.w3.org/2000/svg" width="23" height="23" viewBox="0 0 24 24" fill="white" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-github"><path d="M9 19c-5 1.5-5-2.5-7-3m14 6v-3.87a3.37 3.37 0 0 0-.94-2.61c3.14-.35 6.44-1.54 6.44-7A5.44 5.44 0 0 0 20 4.77 5.07 5.07 0 0 0 19.91 1S18.73.65 16 2.48a13.38 13.38 0 0 0-7 0C6.27.65 5.09 1 5.09 1A5.07 5.07 0 0 0 5 4.77a5.44 5.44 0 0 0-1.5 3.78c0 5.42 3.3 6.61 6.44 7A3.37 3.37 0 0 0 9 18.13V22"></path></svg>');
padding-right: 10px;
}

[data-md-color-primary=white] .md-button:hover {
background: inherit;
color: inherit;
border-color: inherit;
}

/*
[data-md-color-primary=white] .md-button--primary:hover {
background: rgba(0,0,0,1);
4 changes: 4 additions & 0 deletions docs/assets/stylesheets/landing.css
@@ -126,7 +126,11 @@
border-image: linear-gradient(45deg, #0048ff, #ce00ff) 10;
border-width: 0.75px;
border-style: solid;
}

.tx-landing__highlights_grid > a, .tx-landing__highlights_grid > a:hover {
text-decoration: none;
color: inherit;
}

@media screen and (min-width: 76.1875em) {
8 changes: 8 additions & 0 deletions docs/examples/index.md
@@ -0,0 +1,8 @@
---
template: examples.html
title: Examples
hide:
- navigation
- toc
- footer
---
156 changes: 156 additions & 0 deletions docs/examples/llmchat.md
@@ -0,0 +1,156 @@
# LLM as Chatbot

![dstack-llmchat-gallery](../assets/images/dstack-llmchat-gallery.png){ width=800 }

This [example](https://github.com/deep-diver/LLM-As-Chatbot), built by Chansung Park, can serve any open-source LLM either as a Gradio chat app or as a Discord bot.
With `dstack`, you can run either one in any cloud with a single command.
To give it a try, follow the instructions below.

## 1. Define a profile

??? info "Prerequisites"
    Before running the example, ensure that you have [installed](../docs/installation/pip.md) `dstack` and [configured a project](../docs/guides/projects.md)
    to use your preferred cloud account (AWS, GCP, Azure, or Lambda Cloud).

Each LLM requires specific resources. To tell `dstack` what resources are required, define a profile
in the `.dstack/profiles.yml` file within your project.

Each profile must include the project name; optionally, you can also specify the GPU name, GPU memory, instance type,
retry policy, and more. Check the [reference](../docs/reference/profiles.yml.md) for details.

<div editor-title=".dstack/profiles.yml">

```yaml
profiles:
  - name: gcp-t4
    project: gcp
    resources:
      gpu:
        name: T4
    default: true
```
</div>

If you use this profile, `dstack` will use the project named `gcp` and a cloud instance that has an NVIDIA T4 GPU.
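
If a model needs a bigger GPU or more memory, you can add another profile to the same `profiles` list. The sketch below is illustrative only: the profile name `gcp-a100` is hypothetical, and the extra field names (`memory`, `spot_policy`) are assumptions that should be verified against the `profiles.yml` reference linked above.

```yaml
profiles:
  - name: gcp-a100        # hypothetical profile for larger models
    project: gcp
    resources:
      memory: 48GB        # assumed field name: system RAM
      gpu:
        name: A100
        memory: 40GB      # assumed field name: GPU memory
    spot_policy: auto     # assumed field name: use spot instances when available
```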

## 2. Run a Gradio app

Here's the configuration that runs the Gradio app:

<div editor-title="gradio.dstack.yml">

```yaml
type: task
env:
  # (Optional) Specify your Hugging Face token
  - HUGGING_FACE_HUB_TOKEN=
  # (Optional) Specify your Serper API Key
  - LLMCHAT_SERPER_API_KEY=
ports:
  - 6006
commands:
  - pip install -r requirements.txt
  - LLMCHAT_APP_MODE=GRADIO python entry_point.py
```

</div>

Here's how you run it with `dstack`:

<div class="termy">

```shell
$ dstack run . -f gradio.dstack.yml
dstack will execute the following plan:
 CONFIGURATION       PROJECT  INSTANCE      RESOURCES              SPOT
 gradio.dstack.yml   gcp      n1-highmem-2  2xCPUs, 13312MB, 1xT4  auto
Continue? [y/n]: y
Provisioning and establishing an SSH tunnel...
Running on local URL: http://127.0.0.1:6006
To interrupt, press Ctrl+C...
```

</div>

After you confirm, `dstack` will provision the cloud instance, run the task, and forward the defined ports to your local
machine for secure and convenient access.

![dstack-llmchat-welcome](../assets/images/dstack-llmchat-welcome.png){ width=800 }
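
Since port 6006 is forwarded over the SSH tunnel, you can also verify from another terminal that the app is reachable locally (a quick sanity check, not required):

```shell
# The Gradio app defined in gradio.dstack.yml listens on port 6006,
# which dstack forwards to 127.0.0.1 on your machine.
$ curl -I http://127.0.0.1:6006
```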

!!! info "NOTE:"
    To use a non-default profile, simply specify its name with `--profile NAME` when using `dstack run`.
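
For example, with a hypothetical non-default profile named `gcp-a100` (as sketched earlier), the command would look like this:

```shell
$ dstack run . -f gradio.dstack.yml --profile gcp-a100
```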

## 3. Run a Discord bot

Here's the configuration that runs the Discord bot:

<div editor-title="discord.dstack.yml">

```yaml
type: task
env:
  # (Required) Specify your Discord bot token.
  - DISCORD_BOT_TOKEN=
  # (Required) Specify the name of the model. See `README.md` for supported models.
  - DISCORD_BOT_MODEL_NAME=alpaca-lora-7b
  # (Optional) Specify your Hugging Face token
  - HUGGING_FACE_HUB_TOKEN=
  # (Optional) Specify your Serper API Key to enable Internet search support.
  - LLMCHAT_SERPER_API_KEY=

commands:
  - pip install -r requirements.txt --progress-bar off
  - LLMCHAT_APP_MODE=DISCORD python entry_point.py
```
</div>
??? info "How to acquire a Discord bot token"
Before running, ensure you have specified your Discord bot token, which you can obtain from the [Discord Developer
Portal](https://discord.com/developers/docs/intro). If you haven't set up a Discord Bot on the portal yet,
follow the [How to Create a Discord Bot Account](https://www.freecodecamp.org/news/create-a-discord-bot-with-python/)
section of the tutorial from freeCodeCamp.
Finally, here's how you run it with `dstack`:

<div class="termy">

```shell
$ dstack run . -f discord.dstack.yml
dstack will execute the following plan:
 CONFIGURATION       PROJECT  INSTANCE      RESOURCES              SPOT
 discord.dstack.yml  gcp      n1-highmem-2  2xCPUs, 13312MB, 1xT4  auto
Continue? [y/n]: y
Provisioning...
To interrupt, press Ctrl+C...
```

</div>

Once you confirm, `dstack` will provision the cloud instance and run the task. When the bot is up, you can send it messages on Discord.

![dstack-llmchat-discord-chat](../assets/images/dstack-llmchat-discord-chat.png){ width=800 }

For advanced commands supported by the bot, check the [README](https://github.com/deep-diver/LLM-As-Chatbot#discord-bot) file.
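
When you're done experimenting, you can stop the run to release the cloud instance. Here's a minimal sketch, assuming the standard `dstack` CLI commands for listing and stopping runs:

```shell
$ dstack ps                # list runs and their status
$ dstack stop <run-name>   # stop the run and free the cloud instance
```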

[Source code](https://github.com/deep-diver/LLM-As-Chatbot){ .md-button .md-button--github }

36 changes: 36 additions & 0 deletions docs/overrides/examples.html
@@ -0,0 +1,36 @@
{% extends "main.html" %}

{% block content %}
<section class="tx-container">
<div class="md-grid md-typeset">
<div class="-landing__highlights">
<div class="tx-landing__highlights_text">
<h2>Examples</h2>
</div>

<div class="tx-landing__highlights_grid">
<a href="llmchat">
<div class="feature-cell">
<div class="feature-icon">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
<path d="M12 3c5.5 0 10 3.58 10 8s-4.5 8-10 8c-1.24 0-2.43-.18-3.53-.5C5.55 21 2 21 2 21c2.33-2.33 2.7-3.9 2.75-4.5C3.05 15.07 2 13.13 2 11c0-4.42 4.5-8 10-8m5 9v-2h-2v2h2m-4 0v-2h-2v2h2m-4 0v-2H7v2h2Z"></path>
</svg>
</div>
<h3>
LLM as Chatbot
</h3>

<p>
Run an open-source LLM of your choice either as a Gradio chat app or as a Discord bot.
</p>
</div>
</a>
</div>
</div>
</div>
</section>
<br>
<br>
<br>
<br>
{% endblock %}
13 changes: 8 additions & 5 deletions mkdocs.yml
@@ -142,13 +142,15 @@ nav:
- Installation:
- pip: docs/installation/pip.md
- Docker: docs/installation/docker.md
- HG Spaces: docs/installation/hf-spaces.md
- HF Spaces: docs/installation/hf-spaces.md
- Guides:
- Dev environments: docs/guides/dev-environments.md
- Tasks: docs/guides/tasks.md
# - Artifacts: docs/guides/artifacts.md
- Projects: docs/guides/projects.md
- Reference:
- .dstack.yml: docs/reference/dstack.yml.md
- profiles.yml: docs/reference/profiles.yml.md
- CLI:
- dstack run: docs/reference/cli/run.md
- dstack init: docs/reference/cli/init.md
@@ -164,17 +166,18 @@
- dstack secrets: docs/reference/cli/secrets.md
- dstack prune: docs/reference/cli/prune.md
- dstack build: docs/reference/cli/build.md
- .dstack.yml: docs/reference/dstack.yml.md
- profiles.yml: docs/reference/profiles.yml.md
- API:
- Python: docs/reference/api/python.md
- Backends:
- AWS: docs/reference/backends/aws.md
- GCP: docs/reference/backends/gcp.md
- Azure: docs/reference/backends/azure.md
- Lambda: docs/reference/backends/lambda.md
- Examples: https://github.com/dstackai/dstack-examples/blob/main/README.md
- Slack: https://join.slack.com/t/dstackai/shared_invite/zt-xdnsytie-D4qU9BvJP8vkbkHXdi6clQ
- Examples:
- examples/index.md
- Examples:
- LLM as Chatbot: examples/llmchat.md
- Blog:
- blog/index.md
- Slack: https://join.slack.com/t/dstackai/shared_invite/zt-xdnsytie-D4qU9BvJP8vkbkHXdi6clQ
- Twitter: https://twitter.com/dstackai/
