Commit: merge staging branch dev-0.5.0, supporting openai v1 and release 0.5.0 (#80)

* enable openai v1 in load.py (#78)

* update version to 0.5.0

* update doc for openai v1 changes
wenzhe-log10 authored Jan 16, 2024
1 parent e61b67f commit ebd3e42
Showing 21 changed files with 239 additions and 221 deletions.
39 changes: 33 additions & 6 deletions README.md
@@ -20,7 +20,12 @@ from log10.load import log10
log10(openai)
# all your openai calls are now logged
```
For OpenAI v1, use `from log10.load import OpenAI` instead of `from openai import OpenAI`:
```python
from log10.load import OpenAI

client = OpenAI()
```
Access your LLM data at [log10.io](https://log10.io)


@@ -42,8 +47,12 @@ Log10 offers various integration methods, including a python LLM library wrapper
Pick the one that works best for you.

#### OpenAI
| log10 version | openai v0 | openai v1 |
|---------------|-----------|-----------|
| 0.4 | `log10(openai)` | |
| 0.5 | `log10(openai)` | `from log10.load import OpenAI` |

**OpenAI v0** - Use the library wrapper `log10(openai)` as shown [above](#-what-is-this). Check out `examples/logging` in log10 version `0.4.6`.
```python
import openai
from log10.load import log10
@@ -52,12 +61,27 @@ log10(openai)
# openai calls are now logged
```

**OpenAI v1**
> NOTE: OpenAI v1 API support was added in the log10 `0.5.0` release. `load.log10(openai)` still works with openai v1.
```python
from log10.load import OpenAI
# from openai import OpenAI

client = OpenAI()
completion = client.completions.create(model="curie", prompt="Once upon a time")
# All completions.create and chat.completions.create calls will be logged
```
Full script [here](examples/logging/completion.py).
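Under the hood, a drop-in client like this can log by intercepting each `create` call before delegating it. A minimal sketch of that interception idea, using stand-in names (`CallLogger`, `FakeCompletions` are illustrative, not log10's internals):

```python
import functools

class CallLogger:
    """Record each call to a wrapped method, then delegate (illustration only)."""

    def __init__(self):
        self.records = []

    def wrap(self, fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            # Keep enough to inspect the call later.
            self.records.append({"kwargs": kwargs, "result": result})
            return result
        return wrapper

class FakeCompletions:
    """Stand-in for client.chat.completions with a create() method."""

    def create(self, model, messages):
        return {"model": model, "content": "stubbed response"}

logger = CallLogger()
completions = FakeCompletions()
completions.create = logger.wrap(completions.create)

completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello?"}])
print(len(logger.records))  # prints 1
```

The real client additionally ships each record to the log10 backend; the wrapping shape is the relevant part here.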


**Use Log10 LLM abstraction**

```python
from log10.llm import Log10Config
from log10.openai import OpenAI

llm = OpenAI({"model": "gpt-3.5-turbo"}, log10_config=Log10Config())
```
Requires the openai v1+ library. Full script [here](examples/logging/llm_abstraction.py#6-#14).

#### Anthropic
Use library wrapper `log10(anthropic)`.
@@ -74,6 +98,7 @@ Use Log10 LLM abstraction.
Full script [here](examples/logging/llm_abstraction.py#16-#19).
```python
from log10.anthropic import Anthropic
from log10.llm import Log10Config

llm = Anthropic({"model": "claude-2"}, log10_config=Log10Config())
```

@@ -85,12 +110,14 @@ Adding other providers is on the roadmap.
**MosaicML** with LLM abstraction. Full script [here](/examples/logging/mosaicml_completion.py).
```python
from log10.llm import Log10Config
from log10.mosaicml import MosaicML

llm = MosaicML({"model": "llama2-70b-chat/v1"}, log10_config=Log10Config())
```

**Together** with LLM abstraction. Full script [here](/examples/logging/together_completion.py).
```python
from log10.llm import Log10Config
from log10.together import Together

llm = Together({"model": "togethercomputer/llama-2-70b-chat"}, log10_config=Log10Config())
```

@@ -137,7 +164,7 @@ And provide the following configuration in either a `.env` file, or as environment variables

### 🧠🔁 Readiness for RLHF & self hosting

Use your data and feedback from users to fine-tune custom models with RLHF, with the option of building and deploying more reliable, accurate and efficient self-hosted models.

### 👥🤝 Collaboration

@@ -158,7 +185,7 @@ Create flexible groups to share and collaborate over all of the above features
You can find and run examples under the `examples` folder, e.g. to run a logging example:
```
python examples/logging/chatcompletion.py
```

You can also run some end-to-end tests with [`xdoctest`](https://github.com/Erotemic/xdoctest) installed (`pip install xdoctest`).

1 change: 1 addition & 0 deletions examples/README.md
@@ -25,6 +25,7 @@ Can be run on `OpenAI` or `Anthropic`
## Logging (and debugging)

### OpenAI
Requires `openai >= 1.0.0`.
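A script that depends on the v1 API can check the installed openai version up front. A minimal sketch (plain tuple comparison; non-numeric pre-release suffixes are simply dropped):

```python
from importlib import metadata

def version_tuple(version: str) -> tuple:
    """Convert "1.7.2" into (1, 7, 2) for ordering; drops non-numeric suffixes."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def openai_is_v1() -> bool:
    """True if the installed openai package is >= 1.0.0."""
    try:
        installed = metadata.version("openai")
    except metadata.PackageNotFoundError:
        return False
    return version_tuple(installed) >= (1, 0, 0)

print(version_tuple("0.28.1") >= (1, 0, 0))  # prints False
```

For production use, `packaging.version.Version` handles pre-release tags correctly; this sketch only shows the gating idea.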

- `chatCompletion_async_vs_sync.py` Compare latencies when logging in async vs sync mode
- `chatCompletion.py` Chat endpoint example
12 changes: 3 additions & 9 deletions examples/logging/chatcompletion.py
@@ -1,15 +1,9 @@
-import os
-
-import openai
-
-from log10.load import log10
+from log10.load import OpenAI

-log10(openai)
+client = OpenAI()

-openai.api_key = os.getenv("OPENAI_API_KEY")
-
-completion = openai.ChatCompletion.create(
+completion = client.chat.completions.create(
     model="gpt-3.5-turbo",
     messages=[
         {
70 changes: 0 additions & 70 deletions examples/logging/chatcompletion_async_vs_sync.py

This file was deleted.

27 changes: 27 additions & 0 deletions examples/logging/chatcompletion_sync.py
@@ -0,0 +1,27 @@
from log10.load import OpenAI, log10_session


client = OpenAI()

with log10_session(tags=["log10-io/examples"]):
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "Hello?",
            },
        ],
    )
    print(completion.choices[0].message)

    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "Hello again, are you there?",
            },
        ],
    )
    print(completion.choices[0].message)
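`log10_session` scopes both calls above to one session with shared tags. The general shape of such a tagging context manager can be sketched with `contextvars` (names here are illustrative stand-ins, not log10's implementation):

```python
import contextlib
import contextvars

# Tags attached to the current logical context; empty outside any session.
_session_tags = contextvars.ContextVar("session_tags", default=())

@contextlib.contextmanager
def tag_session(tags):
    """Make `tags` visible to every call issued inside the with-block."""
    token = _session_tags.set(tuple(tags))
    try:
        yield
    finally:
        # Restore whatever was active before the session.
        _session_tags.reset(token)

def current_tags():
    """What a logging hook would read when recording a completion."""
    return _session_tags.get()

with tag_session(["log10-io/examples"]):
    inside = current_tags()

outside = current_tags()
print(inside, outside)  # prints ('log10-io/examples',) ()
```

Using `contextvars` rather than a module global keeps the tags correct under async and threaded callers.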
12 changes: 3 additions & 9 deletions examples/logging/completion.py
@@ -1,15 +1,9 @@
-import os
-
-import openai
-
-from log10.load import log10
+from log10.load import OpenAI

-log10(openai)
+client = OpenAI()

-openai.api_key = os.getenv("OPENAI_API_KEY")
-
-response = openai.Completion.create(
+response = client.completions.create(
     model="gpt-3.5-turbo-instruct",
     prompt="Write the names of all Star Wars movies and spinoffs along with the time periods in which they were set?",
     temperature=0,
12 changes: 3 additions & 9 deletions examples/logging/completion_simple.py
@@ -1,15 +1,9 @@
-import os
-
-import openai
-
-from log10.load import log10
+from log10.load import OpenAI

-log10(openai, DEBUG_=True, USE_ASYNC_=False)
+client = OpenAI()

-openai.api_key = os.getenv("OPENAI_API_KEY")
-
-response = openai.Completion.create(
+response = client.completions.create(
     model="gpt-3.5-turbo-instruct",
     prompt="What is 2+2?",
     temperature=0,
21 changes: 6 additions & 15 deletions examples/logging/get_url.py
@@ -1,21 +1,12 @@
-import os
+from log10.load import OpenAI, log10_session

-import openai
-from langchain.llms import OpenAI
-
-from log10.load import log10, log10_session
-
-log10(openai)
-
-openai.api_key = os.getenv("OPENAI_API_KEY")
-
-llm = OpenAI(temperature=0.9, model_name="gpt-3.5-turbo-instruct")
+client = OpenAI()

 with log10_session() as session:
     print(session.last_completion_url())

-    response = openai.Completion.create(
+    response = client.completions.create(
         model="gpt-3.5-turbo-instruct",
         prompt="Why did the chicken cross the road?",
         temperature=0,
@@ -27,7 +18,7 @@

     print(session.last_completion_url())

-    response = openai.Completion.create(
+    response = client.completions.create(
         model="gpt-3.5-turbo-instruct",
         prompt="Why did the cow cross the road?",
         temperature=0,
@@ -42,7 +33,7 @@
 with log10_session() as session:
     print(session.last_completion_url())

-    response = openai.Completion.create(
+    response = client.completions.create(
         model="gpt-3.5-turbo-instruct",
         prompt="Why did the frog cross the road?",
         temperature=0,
@@ -54,7 +45,7 @@

     print(session.last_completion_url())

-    response = openai.Completion.create(
+    response = client.completions.create(
         model="gpt-3.5-turbo-instruct",
         prompt="Why did the scorpion cross the road?",
         temperature=0,
3 changes: 0 additions & 3 deletions examples/logging/langchain_multiple_tools.py
@@ -1,13 +1,10 @@
 import os

 import openai

-from log10.load import log10
-
-log10(openai)

 openai.api_key = os.getenv("OPENAI_API_KEY")
 MAX_TOKENS = 512
 TOOLS_DEFAULT_LIST = ["llm-math", "wikipedia"]
4 changes: 0 additions & 4 deletions examples/logging/langchain_qa.py
@@ -1,14 +1,10 @@
 import os

 import openai

-from log10.load import log10
-
-log10(openai)
-
 openai.api_key = os.getenv("OPENAI_API_KEY")

 # Example from: https://python.langchain.com/en/latest/use_cases/question_answering.html
 # Download the state_of_the_union.txt here: https://raw.githubusercontent.com/hwchase17/langchain/master/docs/modules/state_of_the_union.txt
 # This example requires: pip install chromadb
4 changes: 0 additions & 4 deletions examples/logging/langchain_simple_sequential.py
@@ -1,14 +1,10 @@
 import os

 import openai

-from log10.load import log10
-
-log10(openai)
-
 openai.api_key = os.getenv("OPENAI_API_KEY")

 from langchain.chains import LLMChain, SimpleSequentialChain
 from langchain.llms import OpenAI
 from langchain.prompts import PromptTemplate
4 changes: 0 additions & 4 deletions examples/logging/langchain_sqlagent.py
@@ -1,5 +1,4 @@
 import datetime
-import os
 import random

 import openai

@@ -78,9 +77,6 @@ def generate_random_user():

 session.close()

-# Setup vars for Langchain
-openai.api_key = os.getenv("OPENAI_API_KEY")
-
 # Setup Langchain SQL agent
 db = SQLDatabase.from_uri("sqlite:///users.db")
 toolkit = SQLDatabaseToolkit(db=db)
12 changes: 3 additions & 9 deletions examples/logging/long_context_exception.py
@@ -1,17 +1,11 @@
-import os
+from log10.load import OpenAI

-import openai
-
-from log10.load import log10
-
-log10(openai)
+client = OpenAI()

-openai.api_key = os.getenv("OPENAI_API_KEY")
-
 text_to_repeat = "What is the meaning of life?" * 1000

-response = openai.Completion.create(
+response = client.completions.create(
     model="gpt-3.5-turbo-instruct",
     prompt=text_to_repeat,
     temperature=0,
