Updating OpenAI embedding example to use gateways to work better with docker run (#2)
cdbartholomew authored Jan 21, 2024
1 parent 012e520 commit 7d32049
Showing 3 changed files with 52 additions and 2 deletions.
25 changes: 25 additions & 0 deletions examples/applications/compute-openai-embeddings/README.md
@@ -0,0 +1,25 @@
# Computing text embeddings with OpenAI

This sample application shows how to use OpenAI to compute text embeddings by calling its API.

## Configure your OpenAI API Key

Export your OpenAI access key as an environment variable:

```
export OPEN_AI_ACCESS_KEY=...
```
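
The application reads this key through the secrets file passed with `-s` in the deploy command below, which maps it into the `secrets.open-ai.*` values referenced by `pipeline.yaml`. Here is a minimal sketch of what the relevant entry in `examples/secrets/secrets.yaml` might look like; the keys and defaults are assumptions, apart from `embeddings-model`, which the pipeline references:

```
secrets:
  - id: open-ai
    data:
      # assumed mapping from the exported environment variable
      access-key: "${OPEN_AI_ACCESS_KEY:-}"
      # assumed default; must match the name of your embeddings model deployment
      embeddings-model: "${OPEN_AI_EMBEDDINGS_MODEL:-text-embedding-ada-002}"
```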

## Deploy the LangStream application

```
./bin/langstream docker run test -app examples/applications/compute-openai-embeddings -s examples/secrets/secrets.yaml
```

## Talk with the application using the CLI
Since the application exposes gateways, we can use the gateway API to produce and consume messages.

```
./bin/langstream gateway chat test -cg output -pg input -p sessionId=$(uuidgen)
```
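
When you type a message, the pipeline wraps it in a JSON record and appends the computed embedding. A rough illustration of what the consumed output could look like (the field names follow `pipeline.yaml`; the embedding values and exact CLI formatting are placeholders):

```
> hello world
{"text": "hello world", "embeddings": [0.0023, -0.0118, 0.0074, ...]}
```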

21 changes: 21 additions & 0 deletions examples/applications/compute-openai-embeddings/gateways.yaml
@@ -0,0 +1,21 @@
gateways:
  - id: "input"
    type: produce
    topic: "input-topic"
    parameters:
      - sessionId
    produceOptions:
      headers:
        - key: langstream-client-session-id
          valueFromParameters: sessionId

  - id: "output"
    type: consume
    topic: "output-topic"
    parameters:
      - sessionId
    consumeOptions:
      filters:
        headers:
          - key: langstream-client-session-id
            valueFromParameters: sessionId
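
The `sessionId` parameter supplied by the client is attached to produced messages as the `langstream-client-session-id` header, and the consume gateway filters on the same header, so each client only receives the results of its own requests. Below is a hedged sketch of driving the two gateways separately from the CLI, assuming the `gateway produce` and `gateway consume` subcommands follow the same application-id/gateway-id pattern as `gateway chat`; verify the exact flags with `./bin/langstream gateway -h`:

```
# assumed CLI syntax; check ./bin/langstream gateway -h
SESSION_ID=$(uuidgen)
# consume from the "output" gateway, filtered to this session
./bin/langstream gateway consume test output -p sessionId=$SESSION_ID &
# produce a message through the "input" gateway with the same session id
./bin/langstream gateway produce test input -v "hello world" -p sessionId=$SESSION_ID
```
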
8 changes: 6 additions & 2 deletions examples/applications/compute-openai-embeddings/pipeline.yaml
@@ -25,15 +25,19 @@ topics:
 errors:
   on-failure: "skip"
 pipeline:
+  - name: "convert-to-structure"
+    input: "input-topic"
+    type: "document-to-json"
+    configuration:
+      text-field: "text"
   - name: "compute-embeddings"
     id: "step1"
     type: "compute-ai-embeddings"
-    input: "input-topic"
     output: "output-topic"
     configuration:
       model: "${secrets.open-ai.embeddings-model}" # This needs to match the name of the model deployment, not the base model
       embeddings-field: "value.embeddings"
-      text: "{{ value.name }} {{ value.description }}"
+      text: "{{ value.text }}"
       batch-size: 10
       # This value is in milliseconds. Take it into account when using this agent in a chat response pipeline,
       # since it directly impacts the latency of the response.
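
To make the change above concrete, here is a rough sketch of how a record moves through the updated pipeline (the shapes follow the `text-field`, `text`, and `embeddings-field` settings; the embedding values are placeholders):

```
# raw message produced to input-topic
hello world

# after "convert-to-structure" (document-to-json, text-field: "text")
{"text": "hello world"}

# after "compute-embeddings" (embeddings-field: "value.embeddings"), written to output-topic
{"text": "hello world", "embeddings": [0.0023, -0.0118, ...]}
```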
