Reformat code samples on llm university text-generation section #356

Merged (1 commit) on Jan 14, 2025
@@ -20,7 +20,8 @@ To set up, we first import the Cohere module and create a client.
 
 ```python PYTHON
 import cohere
-co = cohere.Client("COHERE_API_KEY") # Your Cohere API key
+
+co = cohere.Client("COHERE_API_KEY") # Your Cohere API key
 ```
 
 At its most basic, we only need to pass to the Chat endpoint the user message using the `message` parameter – the only required parameter for the endpoint.
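A minimal end-to-end sketch of that call, using a hypothetical stand-in client so it runs without the SDK or an API key (with the real SDK, `co = cohere.Client("COHERE_API_KEY")` plays this role and `co.chat(message="Hello")` sends the request):

```python
# Hypothetical stand-in for cohere.Client, for illustration only.
class FakeChatResponse:
    def __init__(self, text):
        self.text = text

class FakeChatClient:
    def chat(self, message, model="command-r-plus", preamble=None):
        # A real client sends an HTTP request; this just echoes the input
        # so the call shape (one required `message` argument) is visible.
        return FakeChatResponse(f"[{model}] reply to: {message}")

co = FakeChatClient()
response = co.chat(message="Hello")  # `message` is the only required argument
print(response.text)
```

The optional arguments mirror the parameters used later in this diff (`model`, `preamble`); only `message` must be supplied.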
@@ -53,9 +54,11 @@ In the quickstart example, we didn’t have to define a preamble because a default…
 Here’s an example. We added a preamble telling the chatbot to assume the persona of an expert public speaking coach. As a result, we get a response that adopts that persona.
 
 ```python PYTHON
-response = co.chat(message="Hello",
-                   model="command-r-plus",
-                   preamble="You are an expert public speaking coach. Don't use any greetings.")
+response = co.chat(
+    message="Hello",
+    model="command-r-plus",
+    preamble="You are an expert public speaking coach. Don't use any greetings.",
+)
 print(response.text)
 ```

@@ -78,13 +81,15 @@ In streaming mode, the endpoint will generate a series of objects. To get the actual…
 If you have not already, make your own copy of the Google Colaboratory notebook and run the code in this section to see the same example with streamed responses activated.
 
 ```python PYTHON
-stream = co.chat_stream(message="Hello. I'd like to learn about techniques for effective audience engagement",
-                        model="command-r-plus",
-                        preamble="You are an expert public speaking coach")
+stream = co.chat_stream(
+    message="Hello. I'd like to learn about techniques for effective audience engagement",
+    model="command-r-plus",
+    preamble="You are an expert public speaking coach",
+)
 
 for event in stream:
     if event.event_type == "text-generation":
-        print(event.text, end='')
+        print(event.text, end="")
 ```
 
 ```
@@ -149,21 +154,23 @@ while True:
     message = input("User: ")
 
     # Typing "quit" ends the conversation
-    if message.lower() == 'quit':
+    if message.lower() == "quit":
         print("Ending chat.")
         break
 
     # Chatbot response
-    stream = co.chat_stream(message=message,
-                            model="command-r-plus",
-                            preamble=preamble,
-                            conversation_id=conversation_id)
+    stream = co.chat_stream(
+        message=message,
+        model="command-r-plus",
+        preamble=preamble,
+        conversation_id=conversation_id,
+    )
 
-    print("Chatbot: ", end='')
+    print("Chatbot: ", end="")
 
     for event in stream:
         if event.event_type == "text-generation":
-            print(event.text, end='')
+            print(event.text, end="")
         if event.event_type == "stream-end":
             chat_history = event.response.chat_history
 
@@ -237,12 +244,12 @@ The chat history is a list of multiple turns of messages from the user and the chatbot…
 from cohere import ChatMessage
 
 chat_history = [
     ChatMessage(role="USER", message="What is 2 + 2"),
     ChatMessage(role="CHATBOT", message="The answer is 4"),
     ChatMessage(role="USER", message="Add 5 to that number"),
     ChatMessage(role="CHATBOT", message="Sure. The answer is 9"),
-    ...
+    ...,
 ]
 ```
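As a rough illustration of how that turn list grows over a conversation, here is a sketch using plain dicts in place of `ChatMessage` so it runs without the SDK installed:

```python
# Sketch: accumulate USER/CHATBOT turns the way the chat_history list does.
chat_history = []

def add_turn(history, user_msg, bot_msg):
    # Each exchange appends one USER turn and one CHATBOT turn.
    history.extend(
        [
            {"role": "USER", "message": user_msg},
            {"role": "CHATBOT", "message": bot_msg},
        ]
    )

add_turn(chat_history, "What is 2 + 2", "The answer is 4")
add_turn(chat_history, "Add 5 to that number", "Sure. The answer is 9")
print(len(chat_history))  # two exchanges -> four entries
```

With the real SDK each dict would be a `ChatMessage(role=..., message=...)` object instead.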

The following modifies the previous implementation by using `chat_history` instead of `conversation_id` for managing the conversation history.
@@ -262,29 +269,33 @@ while True:
     message = input("User: ")
 
     # Typing "quit" ends the conversation
-    if message.lower() == 'quit':
+    if message.lower() == "quit":
         print("Ending chat.")
         break
 
     # Chatbot response
-    stream = co.chat_stream(message=message,
-                            model="command-r-plus",
-                            preamble=preamble,
-                            chat_history=chat_history)
+    stream = co.chat_stream(
+        message=message,
+        model="command-r-plus",
+        preamble=preamble,
+        chat_history=chat_history,
+    )
 
     chatbot_response = ""
-    print("Chatbot: ", end='')
+    print("Chatbot: ", end="")
 
     for event in stream:
         if event.event_type == "text-generation":
-            print(event.text, end='')
+            print(event.text, end="")
             chatbot_response += event.text
     print("\n")
 
     # Add to chat history
     chat_history.extend(
-        [ChatMessage(role="USER", message=message),
-         ChatMessage(role="CHATBOT", message=chatbot_response)]
+        [
+            ChatMessage(role="USER", message=message),
+            ChatMessage(role="CHATBOT", message=chatbot_response),
+        ]
     )
 ```

@@ -134,9 +134,10 @@ Using a custom model is as simple as substituting the baseline model with the model…
 
 ```python PYTHON
 response = co.generate(
-    model='26db2994-cf88-4243-898d-31258411c120-ft', # REPLACE WITH YOUR MODEL ID
-    prompt="""Turn the following message to a virtual assistant into the correct action:
-Send a message to Alison to ask if she can pick me up tonight to go to the concert together""")
+    model="26db2994-cf88-4243-898d-31258411c120-ft",  # REPLACE WITH YOUR MODEL ID
+    prompt="""Turn the following message to a virtual assistant into the correct action:
+Send a message to Alison to ask if she can pick me up tonight to go to the concert together""",
+)
 ```

But of course, we need to know if this model is performing better than the baseline in the first place. For this, there are a couple of ways we can evaluate our model.
@@ -178,17 +179,19 @@ We run the following code for each of the baseline (`command`) and the finetuned…
 
 ```python PYTHON
 # Create a function to call the endpoint
-def generate_text(prompt,temperature,num_gens):
-    response = co.generate(
-        model='command', # Repeat with the custom model
-        prompt=prompt,
-        temperature=temperature,
-        num_generations = num_gens,
-        stop_sequences=["\n\n"])
-    return response
+def generate_text(prompt, temperature, num_gens):
+    response = co.generate(
+        model="command",  # Repeat with the custom model
+        prompt=prompt,
+        temperature=temperature,
+        num_generations=num_gens,
+        stop_sequences=["\n\n"],
+    )
+    return response
 
+
 # Define the prompt
-prompt="""Turn the following message to a virtual assistant into the correct action:
+prompt = """Turn the following message to a virtual assistant into the correct action:
 Send a message to Alison to ask if she can pick me up tonight to go to the concert together"""
 
 # Define the range of temperature values and num_generations
@@ -198,14 +201,14 @@ num_gens = 3
 # Iterate generation over the range of temperature values
 print(f"Temperature range: {temperatures}")
 for temperature in temperatures:
-    response = generate_text(prompt,temperature,num_gens)
-    print("-"*10)
-    print(f'Temperature: {temperature}')
-    print("-"*10)
-    for i in range(3):
-        text = response.generations[i].text
-        print(f'Generation #{i+1}')
-        print(f'Text: {text}\n')
+    response = generate_text(prompt, temperature, num_gens)
+    print("-" * 10)
+    print(f"Temperature: {temperature}")
+    print("-" * 10)
+    for i in range(3):
+        text = response.generations[i].text
+        print(f"Generation #{i+1}")
+        print(f"Text: {text}\n")
 ```

Here are the responses.
@@ -108,21 +108,20 @@ user_message = "Make the text coherent: Pimelodella kronei is a species of three…
 preamble = "You are a writing assistant that helps the user write coherent text."
 
 # Get default model response
-response_pretrained=co.chat(
-    message=user_message,
-    preamble=preamble,
-)
+response_pretrained = co.chat(
+    message=user_message,
+    preamble=preamble,
+)
 
 # Get fine-tuned model response
 response_finetuned = co.chat(
-    message=user_message,
-    model='acb944bb-fb49-4c29-a15b-e6a245a7bdf9-ft',
-    preamble=preamble,
-)
+    message=user_message,
+    model="acb944bb-fb49-4c29-a15b-e6a245a7bdf9-ft",
+    preamble=preamble,
+)
 
-print(f"Default response: {response_pretrained.text}","\n-----")
+print(f"Default response: {response_pretrained.text}", "\n-----")
 print(f"Fine-tuned response: {response_finetuned.text}")
-
 ```

For this example, the output appears as follows:
@@ -159,14 +158,14 @@ while True:
 
     # Typing "quit" ends the conversation
    if message.lower() == 'quit':
        print("Ending chat.")
        break
 
     # Chatbot response
     stream = co.chat_stream(message=message,
                             model='acb944bb-fb49-4c29-a15b-e6a245a7bdf9-ft',
                             preamble=preamble,
                             conversation_id=conversation_id)
 
     print("Chatbot: ", end='')

@@ -18,7 +18,8 @@ To set up, we first import the Cohere module and create a client.
 
 ```python PYTHON
 import cohere
-co = cohere.Client("COHERE_API_KEY") # Your Cohere API key
+
+co = cohere.Client("COHERE_API_KEY") # Your Cohere API key
 ```

## Model Type
@@ -35,8 +36,7 @@ With the [Chat endpoint](/reference/chat), you can choose from several variations…
 Use the `model` parameter to select a variation that suits your requirements. In the code cell, we select `command-r-plus`.
 
 ```python PYTHON
-response = co.chat(message="Hello",
-                   model="command-r-plus")
+response = co.chat(message="Hello", model="command-r-plus")
 print(response.text)
 ```

@@ -80,9 +80,9 @@ message = """Suggest a more exciting title for a blog post titled: Intro to Retrieval…
 Respond in a single line."""
 
 for _ in range(5):
-    response = co.chat(message=message,
-                       temperature=0,
-                       model="command-r-plus")
+    response = co.chat(
+        message=message, temperature=0, model="command-r-plus"
+    )
     print(response.text)
 ```

@@ -103,9 +103,9 @@ message = """Suggest a more exciting title for a blog post titled: Intro to Retrieval…
 Respond in a single line."""
 
 for _ in range(5):
-    response = co.chat(message=message,
-                       temperature=1,
-                       model="command-r-plus")
+    response = co.chat(
+        message=message, temperature=1, model="command-r-plus"
+    )
     print(response.text)
 ```
